UVU is an extremely fast and lightweight test runner for Node.js and the browser: Ultimate Velocity, Unleashed
Features:
- Super lightweight
- Extremely performant
- Individually executable test files
- Supports async/await tests
- Supports native ES Modules
- Browser-Compatible
- Familiar API
https://github.com/lukeed/uvu
#js
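For a feel of the "familiar API", here's a minimal test-file sketch (TypeScript, assuming a ts-node setup so the uvu CLI can load .ts files):

```ts
// tests/math.test.ts: a minimal uvu suite (run e.g. via `uvu -r ts-node/register tests`)
import { test } from 'uvu';
import * as assert from 'uvu/assert';

test('adds numbers', () => {
  assert.is(2 + 2, 4);
});

test('supports async/await', async () => {
  const value = await Promise.resolve('ok');
  assert.is(value, 'ok');
});

// Each file is individually executable; test.run() kicks off this file's tests.
test.run();
```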
The tiny framework for building hypertext applications.
Features:
- Do more with less—We have minimized the concepts you need to learn to get stuff done. Views, actions, effects, and subscriptions are all pretty easy to get to grips with and work together seamlessly
- Write what, not how—With a declarative API that's easy to read and fun to write, Hyperapp is the best way to build purely functional, feature-rich, browser-based apps in #js
- Smaller than a favicon—1 kB, give or take. Hyperapp is an ultra-lightweight Virtual DOM, highly-optimized diff algorithm, and state management library obsessed with minimalism
https://github.com/jorgebucaran/hyperapp
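To give a taste of the "write what, not how" claim, here is a minimal counter sketch against the Hyperapp 2 exports (h, text, app); the exports have shifted a bit between 2.x releases, so treat this as an illustration rather than version-exact code:

```ts
import { h, text, app } from "hyperapp";

// Actions are plain functions from the current state to the new state.
const Increment = (count: number) => count + 1;

app({
  init: 0,
  view: (count: number) =>
    h("main", {}, [
      h("h1", {}, text(`Count: ${count}`)),
      h("button", { onclick: Increment }, text("+")),
    ]),
  node: document.getElementById("app")!,
});
```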
⚡Breaking news!
#svelte now officially supports #ts!
TypeScript support in Svelte has been possible for a long time, but you had to mix a lot of disparate tools together and each project ran independently. Today, nearly all of these tools live under the Svelte organization and are maintained by a set of people who take responsibility over the whole pipeline and have common goals.
When we say that Svelte now supports TypeScript, we mean a few different things:
- You can use TypeScript inside your <script> blocks — just add the lang="ts" attribute
- Components with TypeScript can be type-checked with the svelte-check command
- You get autocompletion hints and type-checking as you're writing components, even in expressions inside markup
- TypeScript files understand the Svelte component API — no more red squiggles when you import a .svelte file into a .ts module
https://svelte.dev/blog/svelte-and-typescript
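A small sketch of the first bullet in practice (a made-up Counter.svelte with a typed prop):

```svelte
<!-- Counter.svelte -->
<script lang="ts">
  // Typed prop and state; svelte-check and the editor tooling catch mismatches.
  export let label: string;
  let count: number = 0;

  function increment(): void {
    count += 1;
  }
</script>

<button on:click={increment}>{label}: {count}</button>
```

Running `npx svelte-check` over the project then reports type errors in components from the command line, per the second bullet.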
cdk8s is a software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. cdk8s generates pure Kubernetes YAML - you can use cdk8s to define applications for any Kubernetes cluster running anywhere.
cdk8s apps are programs written in one of the supported programming languages. They are structured as a tree of constructs.
The root of the tree is an App construct. Within an app, users define any number of charts (classes that extend the Chart class). Each chart is synthesized into a separate Kubernetes manifest file. Charts are, in turn, composed of any number of constructs, and eventually from resources, which represent any Kubernetes resource, such as Pod, Service, Deployment, ReplicaSet, etc.
cdk8s apps only define Kubernetes applications; they don't actually apply them to the cluster. When an app is executed, it synthesizes all the charts defined within the app into the dist directory, and then those charts can be applied to any Kubernetes cluster using kubectl apply -f dist/chart.k8s.yaml or a GitOps tool like Flux.
https://github.com/awslabs/cdk8s
#ts #python #devops
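To make the construct tree concrete, here's a hedged TypeScript sketch of an App with a single Chart; the KubeDeployment import is assumed to come from the imports/k8s module that `cdk8s import` generates, so names and paths may differ in your project:

```ts
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
// Assumption: generated into ./imports/k8s by `cdk8s import`
import { KubeDeployment } from './imports/k8s';

class WebChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // A plain Kubernetes Deployment, expressed as an object-oriented construct
    new KubeDeployment(this, 'web', {
      spec: {
        replicas: 2,
        selector: { matchLabels: { app: 'web' } },
        template: {
          metadata: { labels: { app: 'web' } },
          spec: {
            containers: [{ name: 'web', image: 'nginx:1.19', ports: [{ containerPort: 80 }] }],
          },
        },
      },
    });
  }
}

const app = new App();
new WebChart(app, 'web-chart');
app.synth(); // emits dist/web-chart.k8s.yaml, ready for `kubectl apply -f`
```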
The Machine Learning Toolkit for Kubernetes
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. Our goal is not to recreate other services, but to provide a straightforward way to deploy best-of-breed open-source systems for ML to diverse infrastructures. Anywhere you are running Kubernetes, you should be able to run Kubeflow.
Features:
- Kubeflow includes services to create and manage interactive Jupyter notebooks.
- Kubeflow provides a custom TensorFlow training job operator that you can use to train your ML model.
- Kubeflow supports a TensorFlow Serving container to export trained TensorFlow models to Kubernetes.
- Kubeflow Pipelines is a comprehensive solution for deploying and managing end-to-end ML workflows.
- Our development plans extend beyond TensorFlow. We're working hard to extend the support of PyTorch, Apache MXNet, MPI, XGBoost, Chainer, and more. We also integrate with Istio and Ambassador for ingress, Nuclio as a fast multi-purpose serverless framework, and Pachyderm for managing your data science pipelines.
https://www.kubeflow.org/
#devops #ds
YouTube: Introduction to Kubeflow
In this first episode of Kubeflow 101, we give an overview of Kubeflow → https://goo.gle/394UQu6
Kubeflow is an open-source project containing a curated set of compatible tools and frameworks specific for ML.
A #rust crate to offer compile-time assistance for working with unsafe code.
Sometimes functions or methods have preconditions that cannot be ensured in the type system and cannot be guarded against at runtime. The most prominent examples are unsafe functions. When used correctly, unsafe functions "declare the existence of contracts the compiler can't check". These contracts are the preconditions for the function call. Failing to uphold them usually results in a violation of memory safety and undefined behavior.
Currently the most used scheme for dealing with these preconditions on unsafe functions is to mention them in the Safety section of the function's documentation. Programmers using the function then have to check what they have to ensure to call the function correctly. The programmer that uses the function may then leave a comment next to the function, describing why the call is safe (why the preconditions hold).
This approach is even advertised by the compiler (as of 1.44.1) when using an unsafe function outside of an unsafe block.
This library works by allowing programmers to specify preconditions on functions they write in a unified format. Those preconditions are then transformed into an additional function argument. Callers of the function then specify the same preconditions at the call site, along with a reason why they believe the precondition is upheld. If the preconditions don't match or are not specified, the function will have invalid arguments and the code will not compile.
https://github.com/aticu/pre
Introducing Domain-Oriented Microservice Architecture
Recently there has been substantial discussion around the downsides of service-oriented architectures, and microservice architectures in particular. Only a few years ago, many people readily adopted microservice architectures for the numerous benefits they provide, such as flexibility in the form of independent deployments, clear ownership, improvements in system stability, and better separation of concerns. In recent years, however, people have begun to decry microservices for their tendency to greatly increase complexity, sometimes making even trivial features difficult to build.
As Uber has grown to around 2,200 critical microservices, we experienced these tradeoffs first hand. Over the last two years, Uber has attempted to reduce microservice complexity while still maintaining the benefits of a microservice architecture. With this blog post we hope to introduce our generalized approach to microservice architectures, which we refer to as “Domain-Oriented Microservice Architecture” (DOMA).
While it’s been popular in recent years to criticize microservice architectures because of these downsides, few people have advocated an outright rejection of microservice architectures. The operational benefits are too important, and it seems that there are no, or limited, alternatives. Our goal with DOMA is to provide a way forward for organizations that want to reduce overall system complexity while maintaining the flexibility associated with microservice architectures.
This piece explains DOMA, the concerns that led to the adoption of this architecture for Uber, its benefits for platform and product teams, and, finally, some advice for teams who want to adopt this architecture.
#architecture #ddd
Zero configuration web framework written in #js.
Zero abstracts the usual project configuration for routing, bundling, and transpiling to make it easier to get started.
It allows you to build your application without worrying about package management or routing. Write your code in a mix of Node.js, React, HTML, MDX, Vue, Svelte, Python, and static files and put them all in a folder.
Features:
- Auto Configuration: Your project folder doesn't require config files. You just place your code and it's automatically compiled, bundled and served.
- File-system Based Routing: If your code resides in ./api/login.js, it's exposed at http://<SERVER>/api/login. Inspired by good ol' PHP days.
- Auto Dependency Resolution: If a file does require('underscore'), it is automatically installed and resolved. You can always create your own package.json file to install a specific version of a package.
- Multiple Languages: Zero is designed to support code written in many languages, all under a single project. Imagine exposing your TensorFlow model as a Python API, using React pages to consume it, writing the user login code in Node.js, and keeping your landing pages in a mix of HTML or Markdown/MDX.
https://zeroserver.io/
Onivim 2 is a reimagination of the Oni editor. Onivim 2 aims to bring the speed of Sublime, the language integration of #vscode, and the modal editing experience of #vim together, in a single package. Written in #reason
Onivim 2 is built in Reason using the Revery framework.
Onivim 2 uses libvim to manage buffers and provide authentic modal editing, and features a fast, native front-end. In addition, Onivim 2 leverages the VSCode Extension Host process in its entirety - meaning, eventually, complete support for VSCode extensions and configuration.
Goals:
- Modern UX - an experience on par with modern code editors like VSCode and Atom
- VSCode Plugin Support - use all of the features of VSCode plugins, including language servers and debuggers
- Cross-Platform - works on Windows, OSX, and Linux
- Batteries Included - works out of the box
- Performance - no compromises: native performance, minimal input latency
- Easy to Learn - Onivim 2 should be comfortable for non-vimmers, too!
The goal of this project is to build an editor that doesn't exist today - the speed of a native code editor like Sublime, the power of modal editing, and the rich tooling that comes with a lightweight editor like VSCode.
https://github.com/onivim/oni2
Generates LaTeX math description from #python functions.
Personal opinion: that's the easiest way to write LaTeX I know!
(For some reason I cannot add images; Telegram seems buggy. So here's the original tweet: https://twitter.com/deliprao/status/1287283718353072129)
https://github.com/odashi/latexify_py
Command-line viewer for rustdoc documentation.
Has native #vim integration.
https://lib.rs/crates/rusty-man
#rust
dijo is a habit tracker. It is curses-based and runs in your terminal.
dijo is scriptable: hook it up with external programs to track events without moving a finger. dijo is modal, much like a certain text editor.
Features:
- written in #rust
- vim like motions: navigate dijo with hjkl!
- dijo is modal: different modes to view different stats!
- vim like command mode: add with :add, delete with :delete and, above all, quit with :q!
- fully scriptable: configure dijo to track your git commits!
https://github.com/NerdyPepper/dijo
Inquest lets you add log statements to running Python code without restarting your #python instance. It helps you quickly uncover what is going wrong.
Inquest has extremely low overhead: the part that's a Python library is completely idle unless there is something to log. Inquest is specifically designed to let you quickly introspect into Python, even in production environments.
Inquest works by bytecode injection. The library sets up a connection with the backend. When you add a new log statement on the dashboard, the backend relays that change to the connected Python instance. Inside Python, Inquest finds the affected functions inside the VM.
It then uses the Python interpreter to recompile a newly generated piece of Python bytecode with the new log statements inserted, and pointer-swaps the new bytecode with the old bytecode.
https://github.com/yiblet/inquest
Here's a gif of the magic. I'm running a single python instance in the background and I use Inquest to dynamically add log statements to the running code:
Faster Nmap Scanning with #rust
Turns a 17-minute Nmap scan into 19 seconds.
Find all open ports fast with RustScan, automatically pipe them into Nmap.
Features:
- Scans all 65k ports in 8 seconds (on 10k batch size).
- Saves you time by automatically piping it into Nmap. No more manual copying and pasting!
- Does one thing and does it well. Only purpose is to improve Nmap, not replace it!
- Lets you choose what Nmap commands to run, or uses the default.
https://github.com/RustScan/RustScan
Out-of-Core DataFrames for #python, ML, visualize and explore big tabular data at a billion rows per second.
Vaex is a high-performance Python library for lazy Out-of-Core DataFrames (similar to Pandas), to visualize and explore big tabular datasets. It calculates statistics such as mean, sum, count, and standard deviation on an N-dimensional grid for more than a billion (10^9) samples/rows per second. Visualization is done using histograms, density plots and 3d volume rendering, allowing interactive exploration of big data. Vaex uses memory mapping, a zero-memory-copy policy and lazy computations for best performance (no memory wasted).
Key features:
- Instant opening of huge data files (memory mapping)
- Expression system: don't waste memory or time with feature engineering, we (lazily) transform your data when needed
- Out-of-core DataFrame: filtering and evaluating expressions will not waste memory by making copies; the data is kept untouched on disk, and will be streamed only when needed
- Fast groupby / aggregations
- Fast and efficient join
https://github.com/vaexio/vaex
Experimental #html template linting for Jinja, Nunjucks, Django templates, Twig, and Liquid. Forked from jinjalint.
Curlylint is an HTML linter for “curly braces” templates, and their HTML. It focuses on rules to check for common accessibility issues.
https://github.com/thibaudcolas/curlylint
#python
⚡Breaking news!
A lightweight, distinctly #scala take on functional abstractions, with tight ZIO integration.
ZIO Prelude is a radically new approach to functional abstractions in Scala, which throws out the classic functor hierarchy in favor of a modular algebraic approach that is smaller, easier to understand and teach, and more expressive.
ZIO Prelude is an alternative to libraries like Scalaz and Cats, which imported the Haskell type class hierarchy into Scala without making significant changes.
ZIO Prelude has three key areas of focus:
- Data structures, and type classes for traversing them. ZIO Prelude embraces the collections in the Scala standard library, and extends them with new instances and new useful additions.
- Patterns of composition for types. ZIO Prelude provides a small catalog of patterns for binary operators, which combine two values into another value of the same type. These patterns are named after the algebraic laws they satisfy: associativity, commutativity, and identity.
- Patterns of composition for type constructors. ZIO Prelude provides a catalog of patterns for binary operators on type constructors (things like Future, Option, ZIO Task). These patterns are named after the algebraic laws they satisfy (associativity, commutativity, and identity) and the structure they produce, whether a tuple or an either.
The library has a small research-stage package (zio.prelude.fx) that provides abstraction over expressive effect types like ZIO and ZPure.
https://github.com/zio/zio-prelude
#sql Style Guide
You can use this set of guidelines, fork them or make your own - the key here is that you pick a style and stick to it. To suggest changes or fix bugs please open an issue or pull request on GitHub.
These guidelines are designed to be compatible with Joe Celko's SQL Programming Style book to make adoption for teams who have already read that book easier. This guide is a little more opinionated in some areas and in others a little more relaxed. It is certainly more succinct where Celko's book contains anecdotes and reasoning behind each rule as thoughtful prose.
https://www.sqlstyle.guide/
Effekt Language: a research language with effect handlers and lightweight effect polymorphism written in #scala
- Lightweight Effect Polymorphism: no need to understand effect-polymorphic functions or annotate them. Explicit effect polymorphism simply does not exist.
- Effect Safety: a type-and-effect system that does not get in your way. Rely on a simple yet powerful effect system that guarantees that all effects are handled.
- Effect Handlers: (algebraic) effect handlers let you define advanced control-flow structures like generators as user libraries. Those libraries can be seamlessly composed.
https://effekt-lang.github.io/effekt-website/
Run your GitHub Actions locally!
Why would you want to do this? Two reasons:
- Fast Feedback - Rather than having to commit/push every time you want to test out the changes you are making to your .github/workflows/ files (or for any changes to embedded GitHub actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
- Local Task Runner - I love make. However, I also hate repeating myself. With act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!
When you run act, it reads in your GitHub Actions from .github/workflows/ and determines the set of actions that need to be run. It uses the Docker API to either pull or build the necessary images, as defined in your workflow files, and finally determines the execution path based on the dependencies that were defined. Once it has the execution path, it then uses the Docker API to run containers for each action based on the images prepared earlier. The environment variables and filesystem are all configured to match what GitHub provides.
https://github.com/nektos/act
#go #devops
Rector - Upgrade Your Legacy #php App to a Modern Codebase
Rector is a reconstructor tool - it does instant upgrades and instant refactoring of your code. Why refactor manually if Rector can handle 80% of the task for you?
What Can Rector Do for You?
- Upgrade 30000 unit tests from PHPUnit 6 to 9 in 2 weeks
- Complete @var annotations or parameter/return type declarations
- Complete PHP 7.4 property type declarations
- Upgrade your code from PHP 5.3 to 8.0
- Migrate your project from Nette to Symfony
- Refactor Laravel facades to dependency injection
- And much more...
https://github.com/rectorphp/rector