Technologique
OpenAGI summit at the ETH Denver event.
The updates from Sentient.xyz:
Sandeep Nailwal's speech about loyal AI principles:
https://youtu.be/u2_-dUCb_Yk
#AI
#AGI
#OpenAGI
OpenAGI summit at the ETH Denver event.
The updates from Sentient.xyz:
Himanshu Tyagi presented a loyal AI implementation, Sentient Chat:
https://youtu.be/UsuMbk32i44
#AI
#AGI
#OpenAGI
Making truly open-source AI: trustworthy, community-driven, built with top-notch TEE and blockchain technologies. That's the mission of Sentient.
Super interesting and informative podcast:
https://www.youtube.com/live/-P6sFtQRbl8
And you should take Sentient into serious consideration.
Give it a try!
https://chat.sentient.xyz
https://sentient.xyz
https://github.com/sentient-agi
https://huggingface.co/SentientAGI
#AI
#AGI
https://www.youtube.com/live/AyH7zoP-JOg
Great conversation!
Privacy and confidentiality should be fundamental human rights in the era of information and ubiquitous computation.
Always think about how your data will be used: what you say, what you message, and what you prompt to a search engine or an AI model, and how it can and will be used, especially against your interests.
#AI
#AGI
#privacy
#confidentiality
#confidential_computing
#CC
#security
Amazing things have been released by the Modular development team (the Mojo language and the MAX inference backend):
https://www.modular.com/blog/max-25-2-unleash-the-power-of-your-h200s-without-cuda
#Mojo
#MAX
#AI
#AGI
Modular provides the MAX platform: the MAX inference backend (engine) and the MAX inference server (MAX Serve).
Just look at this:
https://builds.modular.com/models/DeepSeek-R1-Distill-Llama/8B-Q6_K
https://builds.modular.com/models/Llama-3.3-Instruct/70B?tab=deploy
In terms of deployment it is fantastic! Just one (relatively) tiny container!
And in terms of programming: GPU programming and acceleration without CUDA, using the Mojo language (statically compiled via LLVM), which offers Rust-like capabilities (static memory safety), LLVM MLIR (Multi-Level Intermediate Representation) compilation for amazing low-level code optimization and acceleration, and Python-like syntax, and Mojo embraces the whole Python ecosystem. I've been playing with Mojo for quite a while already (it is the best of both worlds, Rust and Python), but I only tried MAX recently. And Llama.cpp doesn't even compare with MAX!
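To give a feel for the deployment story, here is a minimal client sketch, assuming a locally running MAX Serve container that exposes an OpenAI-compatible chat endpoint on port 8000 (the port, endpoint path and model name here are my assumptions for illustration; see the builds pages above for the exact run command and model identifiers):
```python
# Minimal sketch: query a locally running MAX Serve instance.
# Assumptions: an OpenAI-compatible /v1/chat/completions endpoint on localhost:8000
# and the model name below; check the Modular builds pages for the actual values.
import requests

payload = {
    "model": "DeepSeek-R1-Distill-Llama-8B",  # hypothetical model identifier
    "messages": [
        {"role": "user", "content": "Explain MLIR in one sentence."}
    ],
    "max_tokens": 128,
}

resp = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```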
#Mojo
#MAX
#AI
#AGI
Whoa! We need to update our kernels!
https://hoefler.dev/articles/vsock.html
https://security-tracker.debian.org/tracker/CVE-2025-21756
#kernel
#Linux
#VSock
AI and AGI should be fully open sourced and loyal to builders and community!
The most important thing I should say, and add to Steve's blog post, is that AI should be open (right now we see the opposite: a big-tech-concentrated AI market), free (as in freedom), monetizable, and loyal, for the good of creators/builders/developers and for the community's win. This is the OML principle, and the target goal of the Sentient Foundation, which is building a truly open AGI future and has already developed the Dobby model (and Dobby is already free! =), Sentient Chat, Sentient OpenDeepSearch, the OML Fingerprinting library, the Agent Framework and the Enclaves Framework (proud to be a leading part of it!).
And all of these parts of a groundbreaking product portfolio, all these breakthroughs, were made in less than a year!
More good things to come! Stay tuned!
https://steveklabnik.com/writing/i-am-disappointed-in-the-ai-discourse/
https://www.sentient.xyz
#AI
#AGI
#OpenAGI
AI is dangerously centralized.
Why building community-aligned AI really matters, and how web3 technologies can play a key role in resolving the current situation with centralized AI owned by tech giants, and instead help create a community-driven ecosystem for AI development.
https://x.com/oleg_golev/status/1944157582144246077
The podcast:
https://x.com/autonolas/status/1926675599172452539
#AI
#AGI
#OpenAGI
A technically great web calls service, written in Rust, using Actix and NATS:
https://videocall.rs
https://app.videocall.rs
https://github.com/security-union/videocall-rs
The data storage engine projects we're all waiting for!
I was expecting data storage engines, data warehouse solutions, and cloud-native solutions for data lakes to be built in Rust, as a systems language, within the Rust community.
Long-awaited stuff: awaited ever since 2015, the stabilized Rust v1.0 compiler, and the Rust 2015 edition.
https://github.com/RustFS/RustFS
#Rust
#RustLang
#RustFS
AI anxiety
https://youtu.be/odUjxJy0YMo
Here's Geoffrey Hinton talking about the risks...
In fact, he defined and described the risks as a warning to Humanity, and the risks are as follows:
Unequal access to general artificial intelligence, i.e. AGI, the most powerful of its forms, based on various specialized agents/models that interact with each other. OpenAI's GPT-4o, GPT-4.1, o1, o3 and o4, and GPT-4.5 are such models (DeepSeek R1 as well).
This means that only corporations will have access to such intelligence, but not people and the community.
Since proprietary models are closed, the community is offered a closed restricted model.
Only the corporation and partially the state have a full model.
And AI is actually the Fourth Industrial Revolution: it significantly increases labor productivity, due to a very high level of automation.
Those who have access to it are both more competitive and more efficient.
(Our startup, Sentient OpenAGI, is eager to solve this problem of unequal access to AI and create a platform that will contribute to the development of community-driven open AGI, based on decentralized web3 technologies.)
And there are risks from bad actors, like developing viruses and bio-weapons, genetically selective weapons, etc. For example, converting between the protein structure of a virion shell (and its cell receptors) and the RNA or DNA sequence of nucleotides (nucleic acid bases) is a task already solved by neural networks, as it is mostly a combinatorial task.
This is not a joke or a fantasy anymore! All these are already existing technologies.
#AI
#AGI
And the full speech of Geoffrey Hinton about AI anxiety, risks and warning to Humanity:
https://www.youtube.com/watch?v=IkdziSLYzHw
#AI
#AGI
Python 3.14
https://blog.miguelgrinberg.com/post/python-3-14-is-here-how-fast-is-it
In short: the new Python 3.14 is awesome! Worth updating immediately!
3.14 is way better in performance than any previous version, has an optionally enabled JIT (it doesn't give much of a performance boost, due to Python's highly dynamic nature and volatile run-time object lifetimes) and an optionally disabled GIL for multi-threading (installed as a separately compiled binary on the system).
But the PyPy JIT still outperforms CPython.
Much love for Python anyways! 🙌 Python is a cross-system glue now!
Comparison with Rust is just for fun here: Python will always be much slower, due to dynamic type dispatch through vtables. And due to its dynamic nature, Python will always allow unexpected run-time behavior and run-time crashes (thus everything should be covered thoroughly with tests), while Rust is fully static (even dyn trait impls are checked by the compiler at compile time) and fully type safe (at compile time, before running).
There is also a more consistent benchmarking test suite across languages:
https://benchmarksgame-team.pages.debian.net/benchmarksgame/box-plot-summary-charts.html
(They should update the Python environment soon and then we'll see 3.14 results; 3.13 is used now.)
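For a rough idea of how such version-to-version numbers are usually obtained, here is a naive micro-benchmark sketch (similar in spirit to the recursive fib test in the linked post, not an authoritative benchmark); run it under several interpreters and compare:
```python
# Naive micro-benchmark sketch: run the same CPU-bound function under several
# interpreters (e.g. python3.13, python3.14, python3.14t, pypy3) and compare timings.
import sys
import timeit

def fib(n: int) -> int:
    # Deliberately naive recursion; stresses interpreter call overhead.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    best = min(timeit.repeat(lambda: fib(25), number=10, repeat=5))
    print(f"{sys.version.split()[0]}: {best:.3f} s for 10 x fib(25)")
```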
#Python
#Rust
And there is even more comprehensive continuous benchmarking from TechEmpower, which measures the performance of frameworks and libraries across different languages and ecosystems (JSON serialization, web requests/responses, DB requests and updates, etc.):
https://tfb-status.techempower.com/
https://www.techempower.com/benchmarks/#section=data-r23&a=2&test=update
https://tfb-status.techempower.com/results/d27544b6-7365-4269-a4d4-f908f0d21a3e
https://www.techempower.com/benchmarks/#section=test&runid=d27544b6-7365-4269-a4d4-f908f0d21a3e&a=2&test=update
#benchmark
#benchmarks
#benchmarking
#TechEmpower
NoGIL is definitely a huge leap forward!
From 3.13 the GIL can be disabled... but for that we need to custom-build the interpreter from sources. That's the point that should be refined.
Because not every major Linux distro provides prebuilt packages yet: only Fedora (python3.14-freethreading package), OpenSUSE (python314-nogil package), Ubuntu (python3.14-nogil package through an external PPA) and Nix (python314FreeThreading package); in Gentoo via your own ebuild, or in Arch via your own PKGBUILD script.
This provides python3.14t with NoGIL enabled by default, and we can enable the GIL with the PYTHON_GIL environment variable or the command-line option -X gil for CPython.
But... the free-threaded CPython build is not thread safe!
Thread safety, i.e. managing shared mutable state across simultaneous threads using locks, mutexes and other synchronization primitives, is fully on the developer. Pure Python code is thread safe in this respect. But C code (via FFI) and the Python interpreter code itself, which is written in C, can allow access to the same memory via pointers from several threads, leading to data races and deadlocks. It can also leave dead/hanging objects in memory and thus cause memory leaks over long uptimes.
And this affects run time and is revealed only at run time.
(While in Rust, for example, pointers/references are typed and type-safe, so allocations/deallocations, object lifetimes, and pointers/references to the same data and memory regions are tracked at compile time, via ownership, move semantics and borrow checking, which completely prevents dangling pointers.)
Thus memory sanitizers and thread sanitizers should be used for free-threaded CPython. And not all main/core libraries on PyPI support free-threading yet.
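A minimal sketch of what this means in practice on a free-threaded build (assuming python3.14t is installed; sys._is_gil_enabled() is available since 3.13): shared mutable Python state still needs explicit synchronization.
```python
# Sketch for a free-threaded (no-GIL) build: shared state still needs locks.
# Run e.g. as: PYTHON_GIL=0 python3.14t counter.py   (or: python3.14t -X gil=0 counter.py)
import sys
import threading

print("GIL enabled:", sys._is_gil_enabled())  # False on a free-threaded build with GIL off

counter = 0
lock = threading.Lock()

def work(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:  # without the lock, the read-modify-write increments can race
            counter += 1

threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("counter:", counter)  # 800000 when increments are serialized by the lock
```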
https://docs.python.org/3/howto/free-threading-python.html
https://py-free-threading.github.io/installing-cpython/
https://py-free-threading.github.io/running-gil-disabled/
https://py-free-threading.github.io/debugging/
https://py-free-threading.github.io/thread_sanitizer/
#Python
#notes
The best local LLM inference setup:
4x Mac Studio (M3 Ultra, 512 GB of unified RAM each), 2 TB of UMA RAM in total, with RDMA
EXO 1.0 tooling for clustering, now with tensor parallelism enabled!
RDMA (Remote Direct Memory Access) through Thunderbolt 5: the clustering bottleneck is eliminated
MLX inference acceleration (now with RDMA support!)
And... Mac OS 26.2
https://www.youtube.com/watch?v=A0onppIyHEg&t=3m10s
DeepSeek v3.2 8 bit quantization (original training quantization) at 25 tokens per second! Wow!
516 Watts at the peak of power usage!
Downside: a cost of 50K USD for the hardware. Still better than one or several H100/H200/B200 with their limited, non-unified, discrete memory architecture! =)
And such a setup will work with the way cheaper Mac Minis (no RDMA over Thunderbolt 5 there yet, but it will be added to new generations of M chips; it is now available on M4 Pro and higher and on M3 Ultra)!
Apple is way ahead of everyone again!
In a couple of years this will be a common consumer setup for local LLM inference, using conventional hardware: APUs from AMD and Intel+NVidia (with an integrated CPU+GPU NVLink bus, an upcoming APU architecture), while Apple and NVidia will use Intel fabs and TSMC fabrication.
Enclaves/TEE for hardware memory encryption will be part of such setups, for confidential computing over confidential, sensitive data.
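For context, the single-node MLX side of such a setup (which EXO then clusters across machines) looks roughly like this with the mlx-lm Python package; the model name is just an illustrative MLX-community checkpoint, not the DeepSeek v3.2 setup from the video:
```python
# Minimal single-node MLX inference sketch (Apple Silicon, unified memory).
# Assumes: pip install mlx-lm; the model name below is only an illustrative example.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Meta-Llama-3-8B-Instruct-4bit")
text = generate(
    model,
    tokenizer,
    prompt="Summarize why unified memory helps local LLM inference.",
    max_tokens=128,
)
print(text)
```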
#CPU
#GPU
#LLM
#TEE
I've done a ton of work before New Year on the Enclaves Framework and the CDK Dev Stack (former CDK SOA Backend), and closed most of the tech debt.
Made a new init system in Rust (systemd-inspired) for enclaves' internal provisioning (services, processes).
Started development of the Enclaves Engine. This component is for enclave provisioning on the host. (Think of it like Docker Engine with an API, Docker Compose, YAML configurations, and the containerd runtime, but for secure enclaves.) The first iteration is already published.
For now, the Enclaves Framework is a turn-key solution for AWS Nitro Enclaves, for making custom Nitro Enclave images (with a custom kernel, init, SLC, proxies, attestation server, and other components) with reproducible builds (supply chain security).
With the Enclaves Engine, the goal is to bring the same level of usability to confidential VMs based on KVM, QEMU and the Firecracker VMM (think of it as your own self-hosted enclaves platform, as a turn-key solution).
So, delivering a Docker-like developer experience for enclaves: this motto keeps evolving with the recent efforts! 🙌
https://github.com/sentient-agi/Sentient-Enclaves-Framework
Some of my experiments will be here in my own profile:
https://github.com/andrcmdr/secure-enclaves-framework
https://github.com/andrcmdr/cdk-dev-stack
Covering everything with exhaustive, comprehensive documentation: the amount of documentation (in lines) already exceeds the amount of code! That's funny! 😁
Refactored the main components: the Pipeline Secure Local Channel protocol (over VSock) client-server implementation, the set of VSock-to-TCP proxies, and the Remote Attestation Web Server. Added proper error handling and structured logging with tracing for all components, added dynamic VSock buffer allocation for Pipeline SLC, and refactored the RA Web Server to make it modular.
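As a general illustration of the VSock transport these proxies and the SLC channel sit on (plain Python sockets for brevity, not the framework's Rust implementation or the Pipeline SLC protocol itself; the CID and port below are hypothetical):
```python
# Illustrative VSock client sketch (Linux): connect from the host to a guest/enclave service.
# Generic AF_VSOCK usage only, not the Pipeline SLC protocol; CID and port are examples.
import socket

ENCLAVE_CID = 16     # hypothetical guest CID assigned at enclave/VM launch
SERVICE_PORT = 5005  # hypothetical VSock port the in-enclave service listens on

with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
    s.connect((ENCLAVE_CID, SERVICE_PORT))
    s.sendall(b"ping")
    print(s.recv(1024))
```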
Published a paper about multi-hop re-encryption and delegated decryption, and about the cryptographic difficulties of content protection and DRM as applied to AI content producers and consumers (for community-driven AI).
And published another paper about GPU TEEs, attestation, coherent and unified memory, and how they cause the current scalability difficulties for TEE systems.
https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/multi_hop_reencryption.md
https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/multi_hop_reencryption_for_data_protection.proto.rs.md
https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/unified_vs_discrete_memory_for_confidential_ai_and_cvms.md
https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/unified_vs_discrete_memory_for_confidential_ai_and_cvms_2nd_iteration.md
If any of this sparks your interest, give me a hint and text me! I'm looking for TEE companies that will also adopt and use the Enclaves Framework and the Enclaves Engine.
I think that providing a container-like (Docker-grade) developer and user experience for enclave technologies (hardware isolation and memory encryption) for AI and crypto apps, and lowering the entry barrier to hardware isolation technologies, is a great mission and the ultimate data security goal (especially in the context of cryptography and in-memory secrets protection) for the upcoming decade.
So, feel free to reach out if this is interesting for you as well!
#Enclaves
#TEE
#AI
#Cryptography
#Crypto
My year on #GitHub, from December 31st, 2024 till December 31st, 2025.
Working as a systems developer (using Rust) at an AI startup (Sentient, https://www.sentient.xyz), on confidential AI infrastructure and engines (for CVMs and TEEs), and on a blockchain backend (L1/L2). Contributing to open source.
(Rested only at the beginning of January (New Year's holidays) and at the beginning of May (May holidays).)
Filter noise, focus only on signal. Be steady and stay consistent in your efforts! Everything is reachable!
Making Open Source AI/AGI Win!