Forwarded from linkmeup
How DNS resolvers work might seem like a well-worn topic, but add Dual-Stack, dig deep into the Linux code, and you end up with a multi-page read about something everyone assumed was very simple.
But this story is not about complicating things; it is about explaining how they actually work at the low level.
https://biriukov.dev/docs/resolver-dual-stack-application/0-sre-should-know-about-gnu-linux-resolvers-and-dual-stack-applications/
Viacheslav Biriukov
What every SRE should know about GNU/Linux resolvers and Dual-Stack applications
What every SRE should know about GNU/Linux resolvers and Dual-Stack applications # In this series of posts, I’d like to make a deep dive into the GNU/Linux local facilities used to convert a domain name or hostname into IP addresses, specifically in the context…
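The series above walks through how GNU/Linux turns a hostname into IPv4 and IPv6 addresses. As a minimal illustration of the dual-stack entry point applications actually use, here is a Python sketch of `getaddrinfo()` with `AF_UNSPEC`, which can return both address families; the host and port are arbitrary examples, not taken from the series:

```python
import socket

# With AF_UNSPEC, getaddrinfo() may return both IPv6 (AF_INET6) and
# IPv4 (AF_INET) results for the same name; the libc orders them
# according to its address-selection rules (RFC 6724 on glibc).
results = socket.getaddrinfo("localhost", 80,
                             socket.AF_UNSPEC, socket.SOCK_STREAM)

for family, socktype, proto, canonname, sockaddr in results:
    # sockaddr is (host, port) for IPv4 and (host, port, flowinfo,
    # scope_id) for IPv6.
    print(family, sockaddr)
```

A dual-stack client would typically try these results in order until one `connect()` succeeds, which is exactly the behavior the series unpacks.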
Forwarded from Технологический Болт Генона
Friday!
DOOM, now shipping with:
- Volumetric voxels (Voxel DOOM Project - https://doom.fandom.com/wiki/Doom_voxel_project)
- Ray Tracing (PrBoom: Ray Traced - https://github.com/sultim-t/prboom-plus-rt + https://github.com/sultim-t/xash-rt)
Link to moddb
https://www.moddb.com/mods/doom-2-ray-traced
+
https://github.com/vs-shirokii/gzdoom-rt/releases
Here are a few links for reading up on ray tracing and voxels
3D Computer Graphics Primer: Ray-Tracing as an Example
https://www.scratchapixel.com/lessons/3d-basic-rendering/introduction-to-ray-tracing/how-does-it-work.html
What is real-time ray tracing, and why should you care?
https://www.unrealengine.com/en-US/explainers/ray-tracing/what-is-real-time-ray-tracing
Voxel
https://cgitems.ru/articles/voxel-vokselnaya-grafika/
The Main Benefits and Disadvantages of Voxel Modeling
https://blog.spatial.com/the-main-benefits-and-disadvantages-of-voxel-modeling
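The core primitive the ray-tracing primer above builds on is the ray-object intersection test. As a hedged sketch (not code from any of the linked projects), here is the classic ray-sphere intersection in Python, solving the quadratic for the ray parameter t:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    # direction is assumed to be a unit vector, so the quadratic's
    # leading coefficient is 1.
    ox, oy, oz = origin
    dx, dy, dz = direction
    lx, ly, lz = ox - center[0], oy - center[1], oz - center[2]
    b = 2.0 * (dx * lx + dy * ly + dz * lz)
    c = lx * lx + ly * ly + lz * lz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearest of the two roots
    return t if t >= 0 else None      # intersections behind the origin don't count

# A ray from the origin along +z toward a unit sphere centered at (0, 0, 5)
# hits the near surface at t = 4.0:
t = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A ray tracer runs this kind of test for every pixel's ray against the scene; voxel renderers replace it with a grid-traversal step, which is the trade-off the voxel articles discuss.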
Forwarded from Технологический Болт Генона
Testing Helm Charts Part I
https://grem1.in/post/helm-testing-pt1/
Testing Helm Charts Part II
https://grem1.in/post/helm-testing-pt2/
Forwarded from Опенград
This time the article will again be split into two parts, and both will be mostly a purely theoretical treatment. The topic is Linux namespaces. We already touched on it earlier when we talked about Docker security, but here there will be a bit more detail. I'll publish the second part in a couple of days.
A translation of the Quarkslab blog posts has been available online for a long time, since the original article was published back in 2021, but I wanted to have my own take on this long read on the channel.
Telegraph
A Deep Dive into Linux Namespaces (Part 1)
Introduction: This article is based on the following two parts from the Quarkslab blog: https://blog.quarkslab.com/digging-into-linux-namespaces-part-1.html https://blog.quarkslab.com/digging-into-linux-namespaces-part-2.html Earlier we already touched on…
For a given CPU, the I/O wait time is the time during which that CPU was idle (i.e. didn’t execute any tasks) and there was at least one outstanding disk I/O operation requested by a task scheduled on that CPU (at the time it generated that I/O request).
https://veithen.io/2013/11/18/iowait-linux.html
#cpu #iowait
Andreas Veithen's blog
The precise meaning of I/O wait time in Linux
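The iowait counter quoted above is exposed in `/proc/stat` as the sixth field of each `cpu` line, in USER_HZ ticks. As a self-contained sketch, here is how to parse those fields; the sample line is an illustrative example, not output from a specific machine:

```python
# /proc/stat exposes cumulative per-CPU time counters in USER_HZ ticks.
# Field order (after the "cpu" label): user, nice, system, idle, iowait,
# irq, softirq, steal, guest, guest_nice. The sample below is illustrative.
sample = "cpu  74608 2520 24433 1117073 6176 4054 0 0 0 0"

def parse_cpu_times(line):
    fields = line.split()
    names = ["user", "nice", "system", "idle", "iowait",
             "irq", "softirq", "steal", "guest", "guest_nice"]
    return dict(zip(names, map(int, fields[1:])))

times = parse_cpu_times(sample)
total = sum(times.values())

# iowait is a sub-category of idle time: the CPU ran no task, but a task
# scheduled on it had an outstanding disk I/O request. That is why, as the
# post explains, iowait is an unreliable load metric on its own.
iowait_pct = 100.0 * times["iowait"] / total
print(f"iowait: {iowait_pct:.2f}% of total CPU time")
```

On a live Linux system you would read the real first line of `/proc/stat` twice and diff the counters, since they are cumulative since boot.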
Forwarded from Код и Капуста
What's behind Ctrl+C, and the dark side of interrupts in Unix
#rust
https://sunshowers.io/posts/beyond-ctrl-c-signals/
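The linked post digs into Unix signals from a Rust perspective; as a minimal, language-agnostic sketch of the mechanism it starts from, here is the basic Ctrl+C flow in Python — the terminal driver turns Ctrl+C into SIGINT, and the process either dies or runs a handler it installed (this example is not taken from the post):

```python
import signal

interrupted = []

def handler(signum, frame):
    # Runs asynchronously whenever SIGINT is delivered. Handlers should
    # stay minimal: the interrupted code can be at any point, which is
    # exactly the "dark side" the post explores.
    interrupted.append(signum)

# Replace the default SIGINT disposition (terminate) with our handler.
signal.signal(signal.SIGINT, handler)

# Simulate pressing Ctrl+C by raising SIGINT in this process.
signal.raise_signal(signal.SIGINT)

assert interrupted == [signal.SIGINT]
```

`signal.raise_signal` requires Python 3.8+; in a real terminal session the kernel delivers SIGINT to the whole foreground process group, one of the subtleties the article covers.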
5 Days To Virtualization: A Series On Hypervisor Development
Day 0: Setting up test environment, scripts, shared folders, and WinDbg shortcuts.
This article will detail what testing environment will be used throughout the series, how to setup both serial and network debugging, writing scripts to aid in efficient and fast stand-up for testing, and creating WinDbg shortcuts to quicken debugging init.
Day 1: Driver skeleton, introduction to virtualization, type definitions and the reasoning behind detailing them, compiling, and running unit tests.
This will provide a walk-through on constructing a basic driver skeleton for usage, explain the different pieces and their purpose, introduce virtualization and the various components in great detail, and then provide type definitions, their purpose, and where more information can be found. We’ll close with compiling and running unit tests to verify that everything is written properly and running as expected. (You may find this boring, but it is an incredibly important part of any project.)
Day 2: Writing communication channel for client and VMM, defining important context structures, detailed explanation of important VMM regions, allocating, building, and testing initialization of basic constructs on a single processor.
This will likely be lengthier than the previous two, and the most important one to read all the way through. It will provide details on communicating with your driver, the importance of designing the structure of your hypervisor before implementation, and will explain the various VMM regions required for basic initialization and entrance into VMX operation. All of this will be done for a single processor to lower complexity; Day 3 will begin multi-processor initialization.
Day 3: Multi-processor initialization, setting up VMM regions on MP systems, error checking, VMX instructions, and the importance of unwinding actions.
In this article we'll discuss multi-processor initialization and the various ways to initialize all cores, as well as implementing solid error checking mechanisms and procedures and introducing VMX instructions and their nuances. The article will conclude with the importance of unwinding actions (the ability to gracefully recover from errors, free what has been allocated, and return to a stable system state).
Day 4: VMCS initialization, the differences in guest state versus host state, operation visualization, and the use of intrinsic functions.
This article will be detailed, fast paced, with lots of reference material. There is an entire chapter in the Intel SDM Volume 3C dedicated to VMCS encoding, init, and guest/host state operation. We will only be covering the very basics for understanding. I will also help the reader visualize how operation is entered, exited, and resumed to make it easier to understand the abstractness. We will also cover the implementation of important intrinsic functions required for setting up various guest and host VMCS components.
Day 5: Implementing unconditional vmexit handlers, and testing start-up and shutdown.
This day will cover segmentation on Intel x86-64, demystifying initialization of guest and host segment data. It also includes the implementation of the vmexit handler for the unconditional exits that will occur, explains the assembly stubs, and concludes with a test verifying that the hypervisor starts, runs stably, and shuts down gracefully, returning the system to a stable pre-operation state.
Reverse Engineering
Day 0: Virtual Environment Setup, Scripts, and WinDbg - Reverse Engineering
Day 0 of a 7 day series of articles explaining the process of designing and implementing an Intel based type-2 hypervisor on Windows 10.