
Fábio Coala 29/09/2023 | Source: Mentirinhas
The post Mentirinhas #2064 appeared first on Mentirinhas.
Fábio Coala 28/09/2023 | Source: Mentirinhas
The post Mentirinhas #2063 appeared first on Mentirinhas.
Luke Plant 27/09/2023 | Source: Luke Plant's home page
If you are using static HTML files for your docs, such as with Sphinx or many other doc generators, here is a chunk of code that will speed up loading of pages after the first one. If you’re using some other docs generator, the instructions will probably work with minimal adaptation.
Create a custom.js
file inside your _static
directory, with the following contents:
var script = document.createElement('script');
script.src = "https://unpkg.com/[email protected]";
script.integrity = "sha384-xcuj3WpfgjlKF+FXhSQFQ0ZNr39ln+hwjN3npfM9VBnUskLolQAcN80McRIVOPuO";
script.crossOrigin = 'anonymous';
script.onload = function() {
    var body = document.querySelector("body");
    body.setAttribute('hx-boost', "true");
    htmx.process(body);
};
document.head.appendChild(script);
Add an item to your html_js_files setting in your Sphinx conf.py, for example: html_js_files = ["custom.js"]
Rebuild and you’re done.
What this script does is:
Loads the htmx library.
If it successfully loads, adds the hx-boost attribute to the body element.
Initialises htmx on the page.
This means that htmx will intercept all internal links on the page, and instead of letting the browser load them the normal way, it sends an AJAX request and swaps in the content of the new page. As a result, the whole page doesn’t need to be reloaded by the browser, saving precious milliseconds.
I will provide reasons why you really shouldn’t use the code above, although it works almost perfectly. But first, a rant.
This post was inspired by Mux’s blog post on migrating 50,000 lines of React Server Components. It contains a nice overview of the history of web site architecture, including this quote:
Then, we started wondering: What if we wanted faster responses and more interactivity? Every time a user takes an action, do we really want to send cookies back to the server and make the server generate a whole new page? What if we made the client do that work instead? We can just send all the rendering code to the client as JavaScript!
This was called client-side rendering (CSR) or single-page applications (SPA) and was widely considered a bad move.
However, instead of then suggesting that perhaps we should retrace our steps, the article just plunges on and on, deeper and deeper into the jungle.
Now, this might all make sense if we were talking about a highly interactive site with the most demanding interactivity needs. But I realised the article was about just their documentation site, not the main application.
Now, some docs sites are really fancy and do very clever interactive things. Mux’s, however, is not like that. The only interactive things I could find were:
tabs – like you can get with something like sphinx-code-tabs, powered by a tiny bit of Javascript.
their changelog page – which is more complicated, but whose essential functionality could again be implemented with a really small amount of Javascript added to a static page. I should also note that their page is really pretty sluggish when you change the filters, much slower than it would be with an approach that just selectively hides parts of the page using DOM manipulation.
search. Search is definitely important, but I can’t see why it means the whole site needs to be implemented in React.
A “Was this helpful” component – this could have been a small web component or something similar.
A few fancy transitions in the side bar.
These are not the highly stateful pages that React was designed for. Maybe there are a few other things I didn’t find, but 95% of it could be handled using entirely static HTML, built by any number of simple docs generators, with tiny amounts of Javascript.
The only other thing I noticed is that page transitions generally had that instant feel an SPA can give you, and were noticeably faster than you would get with the static HTML solution I’m suggesting.
So, not to be beaten, I came up with the htmx solution above so I could match that speed.
Now, here’s why you shouldn’t use it:
A typical docs page with Sphinx loads in a few hundred milliseconds, which is fine. Do you really need to shave that down to less than 50 so it feels “instant”? Do your users care?
While it is truly a tiny fraction of the complexity of the React docs site Mux described in their post, you are still adding some significant complexity. Is it worth it?
Are you sure it doesn’t break anything? Are you sure it’s not going to interact badly with some Javascript on some page, maybe some future Javascript you will add?
Have you considered all use cases – like the person who downloads your whole docs site using wget --recursive
so they can browse offline? Answer: if they have no internet connection when they view the docs, it will actually work fine, because the htmx library won’t load at all. But if they are online, the htmx library will load, and then every internal link will break due to CORS errors. You just broke offline viewing. You could fix this very easily with an extra conditional in the script above, but I’m making a point. Is there anything else that’s broken?
No prizes for guessing that while Sphinx-generated sites normally work perfectly with wget --recursive
for offline viewing, docs.mux.com does not work well, to put it mildly. I also wasted hundreds of MB finding out, due to the vast amount of boilerplate every single HTML file has. Don’t be like them.
This is what you should actually do:
recognise that you know exactly how to make your documentation pages load instantly, like an SPA, and could absolutely do it if you wanted to, still with a tiny fraction of the complexity of an actual SPA architecture, and with fixes for the issues I’ve mentioned, in about 15 minutes, then,
don’t.
As protection against the FOMO and fashion that drives so much of web development, this attitude needs a catchy slogan, which is the kind of thing I’m not very good at. But as a first attempt, how about: SNOB driven development. SNOB means “Smugly kNOwing Better”. Or maybe that could be “Smugly NO-ing Better”.
Join me. Be an arrogant SNOB and just say No.
Fábio Coala 27/09/2023 | Source: Mentirinhas
The post Mentirinhas #2062 appeared first on Mentirinhas.
Anonymous 26/09/2023 | Source: Yoshua Wuyts — Blog
This post is part of the Async Iteration series.
Async Functions in Traits (AFIT) are in the process of being stabilized and I figured it would be a good time to look more closely at the properties they provide. In this post I want to compare AFIT-based traits with poll-based traits, using the "async iterator" trait as the driving example. But most everything which applies to async iterator will also apply to other traits such as async read and async write.
In this post I will make the case that the best direction for the stdlib is to base its async traits on AFITs. The intended audience for this post is primarily my fellow members of WG-Async, as well as members of T-Lang and T-Libs. To read a summary of the findings, jump ahead to the conclusion. This post assumes readers are familiar with the inner workings of Rust's async system, as well as with the tradeoffs being discussed.
To provide some flavor to what I'm talking about, in this post we'll be
discussing the "async iterator" trait, asking the question whether we should
base it on fn poll_next
or async fn next
. Here are both variants
side-by-side:
// Using `fn poll_next`.
trait AsyncIterator {
type Item;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}
// Using `async fn next`.
trait AsyncIterator {
type Item;
async fn next(&mut self) -> Option<Self::Item>;
}
I expect pretty much everyone will agree that on a first look the async fn next
-based trait seems easier to use. Rather than needing to think about what
Pin
is, or how Poll
works, we can just write our async functions the way we
usually do, and it will just work. Pretty neat!
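To make that concrete, here's a small sketch of what consuming the async fn next variant could look like. The sum function and its u32 items are illustrative, and this assumes the AsyncIterator trait defined above plus the AFIT feature:

// Plain `async` code: no `Pin`, no `Poll`, no manual wakers.
async fn sum(mut iter: impl AsyncIterator<Item = u32>) -> u32 {
    let mut total = 0;
    while let Some(n) = iter.next().await {
        total += n;
    }
    total
}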
But that's just on the surface. Does that still hold if we look more closely?
Concerns have been raised about the performance of async fn next, claiming not only that it would perform less well, but also that it does not provide essential features, some even going so far as to claim that fn poll_next is fundamentally lower-level and thus the only reasonable choice for a systems programming language. In the remainder of this post we'll go over those claims, and show why upon closer examination they do not appear to hold.
Let's start with the most obvious one: performance. At its core Rust is a
systems programming language, and in order to properly cater to its niche it
tends to only provide abstractions which have comparable performance to their
hand-rolled versions. The claim is that poll_next should provide better performance than async fn next since we're writing the state machine by hand. But when actually measured, the two approaches appear to compile to identical assembly in various configurations - meaning they will have identical performance.
But don't just take my word for it, we can use examples to substantiate this.
Let's create a simple "once" future which holds some data, and when polled it
will return that data. Rather than using complex async/.await machinery, we'll
be creating a new function poll_once
which constructs a dummy waker in-line
and can be used to poll a future exactly once:
pub fn call_once() -> Poll<Option<usize>> {
let mut iter = once(12usize);
poll_once(iter.next())
}
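The poll_once helper itself isn't shown in the post (it lives in the linked Compiler Explorer examples). As a rough sketch of what such a helper could look like - here using a null-data no-op waker rather than the Rc- and Arc-based wakers used later, so the names and details are illustrative:

use std::future::Future;
use std::pin::pin;
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker whose clone/wake/drop operations all do nothing; enough to
// poll a future exactly once.
fn noop_clone(_: *const ()) -> RawWaker {
    RawWaker::new(ptr::null(), &NOOP_VTABLE)
}
fn noop(_: *const ()) {}
static NOOP_VTABLE: RawWakerVTable =
    RawWakerVTable::new(noop_clone, noop, noop, noop);

/// Poll a future exactly once using a dummy waker.
pub fn poll_once<F: Future>(fut: F) -> Poll<F::Output> {
    // SAFETY: the vtable functions never touch the (null) data pointer.
    let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &NOOP_VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    pin!(fut).poll(&mut cx)
}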
Let's start by evaluating what this looks like when implemented using fn poll_next
. We could write this as follows:
struct Once<T>(Option<T>);
impl<T> AsyncIterator for Once<T> {
type Item = T;
fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
// SAFETY: we're projecting into an unpinned field
let this = unsafe { Pin::into_inner_unchecked(self) };
Poll::Ready(this.0.take())
}
}
When polled we project Self
into its fields, which is just the Option
type.
We then call .take to extract the value, leaving None in its place. This should be fairly straightforward. If poll_once
creates a non-atomic dummy
waker (this is just the first example), the compiler will compile this code down
to the following x86 assembly (compiler explorer):
example::call_once:
mov eax, 1
mov edx, 12
ret
This assembly basically means: "Hey I've got the constant '12' over here -
please move it into the return register and then exit the function". That's
about the smallest this function can be without being inlined. Now let's see
what happens if we implement this code using async fn next
. Instead of fn poll_next
we can use an async function directly:
pub struct Once<T>(Option<T>);
impl<T> AsyncIterator for Once<T> {
type Item = T;
async fn next(&mut self) -> Option<T> {
self.0.take()
}
}
It's nice we don't have to perform pin projections anymore (more on that later). But what's the performance like? Well, if this was slower we'd expect it to generate more assembly. So let's take a look (compiler explorer):
example::call_once:
mov eax, 1
mov edx, 12
ret
The assembly is identical! Why is that? Well, for one: the Rust compiler is
pretty good at generating fast code. But we're also in a bit of a simplified
environment. So far in our examples we haven't been using "real" thread-safe wakers,
instead basing our wakers on Rc
. What happens if we switch to Arc
-based
wakers? Here's the link to a compiler
explorer comparing the two. It now generates a
lot more assembly than before (yay atomics), but luckily we can use diff(1)
to
compare the output:
example::call_once:
; 21 lines of assembly + calls to another 118 lines
yosh@MacBook-Pro scratch % pbpaste > one.rs
yosh@MacBook-Pro scratch % pbpaste > two.rs
yosh@MacBook-Pro scratch % diff one.rs two.rs
yosh@MacBook-Pro scratch %
The diff output is empty, meaning there are no differences even when we use Arcs to correctly construct our wakers; it just generates a lot more code. But okay
fine, maybe there are more differences? After all: fn poll_next
has access to
the Waker
and can return Poll
, meaning it has low-level control over the
future state machine while async fn next
does not. What happens if we want to
provide low-level control over the future state machine from async fn next
?
Luckily we've stabilized a simple mechanism for this already:
std::future::poll_fn
.
This function provides the ability to access the low-level internals of any
future, including AFITs. Let's lower our example to make use of this, shall we?
pub struct Once<T>(Option<T>);
impl<T> AsyncIterator for Once<T> {
type Item = T;
async fn next(&mut self) -> Option<T> {
future::poll_fn(|_cx| /* -> Poll<Option<T>> */ {
// We have access to `cx` here which contains the `Waker`.
Poll::Ready(self.0.take())
}).await
}
}
This seems simple enough: whenever we want to do anything low-level inside of an
async fn
, we can use poll_fn
to drop into the future state machine. This
should work not just for the async version of the iterator trait, but for all
async traits. There is more to be said about how this interacts with pinning and
self-referential types, but we'll cover that in more detail later on in the
post. To close this out though: what does this compile
to if we call it using "real" wakers? (compiler explorer):
example::call_once:
; 21 lines of assembly + calls to another 118 lines
yosh@MacBook-Pro scratch % pbpaste > two.rs
yosh@MacBook-Pro scratch % diff one.rs two.rs
yosh@MacBook-Pro scratch %
That's right: the output remains the same. This gives us a pretty good clue
about what is happening here. Inside the compiler async fn next
is desugared
to a future, just like fn poll_next
is. And because of basic inlining and
const-folding optimizations, the resulting state machines are identical - which
means that the resulting assembly is identical too. This is exactly how
zero-cost abstractions are supposed to work, and is the entire premise of Rust's
async system. If we ever find
a case where the optimizer doesn't perform those basic optimizations we can then
treat that as a bug in the compiler - not a limitation of the design.
When people say that "async iterator is not the async version of iterator" they are correct. Well, sort of. If we look at existing implementations that is true: it doesn't quite work like the async version of iterator. Instead what it really is is the async version of "pinned iterator" - which is not a trait we currently have, but there certainly is a case to be made for it. The better question to ask is whether async iterator should be the "async version of iterator" - and I certainly believe it should be 1.
Incidentally that has also been the framing of the trait WG-async has been communicating to T-lang and T-libs, who have signed off on it. I'm not suggesting that this decision should bind us (I don't like to rules lawyer). What I'm instead trying to show with this is that this has been an accepted framing of what the design should achieve for years now, and we've already rejected the framing that "async iterator" (or "stream") should be its own special thing. That certainly can be changed again, but it is not a novel insight by any stretch.
Let me explain what I mean by this using examples. In Rust the base iterator API
has an associated type Item
, a function next
which takes a mutable reference
to self
, and returns an Option<Self::Item>
:
trait Iterator {
type Item;
fn next(&mut self) -> Option<Self::Item>;
}
If we did a direct translation to async Rust, we'd have an API which instead of
exposing an fn next
exposed an async fn next
. The only real difference here
is the addition of the async
keyword:
trait AsyncIterator {
type Item;
async fn next(&mut self) -> Option<Self::Item>;
}
However, when we look at the ecosystem Stream
trait,
or the currently unstable AsyncIterator
API, they are
not implemented in terms of async fn next
. Instead they provide an fn poll_next which takes a pinned reference to self and a mutable reference to the waker context, and wraps the return type in Poll:
trait AsyncIterator {
type Item;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}
In the previous section we've already discussed how you can get access to the
waker context from inside an async function by using poll_fn
. So we can pretty
much ignore the waker context and the Poll
in the return type. That leaves the
change in the self
type. Our async fn next
takes &mut self
, while this
variant takes Pin<&mut Self>
. This isn't necessary for the core functionality
of async iterator, since it is pinning more than needed. So simply put, what
we've just written is in fact the async version of this trait:
trait PinnedIterator {
type Item;
fn next(self: Pin<&mut Self>) -> Option<Self::Item>;
}
Here we have a non-async version of iterator which takes self as a pinned
reference. This is useful if you ever need to write an iterator which can
operate on self-referential structs. For example: if we ever start thinking of
stabilizing generator functions, we want them to be able to hold references
across yield
points. That will require self-referential iterators.
An important insight of this is that the question of whether iterator should be pinned is orthogonal to whether it is async. Which is illustrated by the fact that we can reformulate a "pinned asynchronous iterator" just fine using AFITs:
trait PinnedAsyncIterator {
type Item;
async fn next(self: Pin<&mut Self>) -> Option<Self::Item>;
}
This can be combined with the poll_fn
function as we showed in the previous
section to recreate the low-level semantics of fn poll_next
, providing access
to both a pinned self-type and the future's waker argument. To put it plainly:
"is async" and "is pinned" are orthogonal features, and fn poll_next
needlessly combines both.
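As a sketch of what that could look like, here's the Once example from earlier written against the PinnedAsyncIterator trait above (assuming the nightly AFIT feature; the pin projection mirrors the one in the poll_next version):

use std::future;
use std::pin::Pin;
use std::task::Poll;

struct Once<T>(Option<T>);

impl<T> PinnedAsyncIterator for Once<T> {
    type Item = T;
    async fn next(mut self: Pin<&mut Self>) -> Option<T> {
        future::poll_fn(|_cx| {
            // `_cx` exposes the `Waker`, just like `fn poll_next` would.
            // SAFETY: we're projecting into an unpinned field.
            let this = unsafe { self.as_mut().get_unchecked_mut() };
            Poll::Ready(this.0.take())
        })
        .await
    }
}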
People occasionally ask me about the Unpin
trait when I talk about async
versions of traits. For example if you compare
Iterator::next
and
futures::stream::StreamExt::next
,
you will see that the latter has an extra where Self: Unpin
bound.
// `Iterator::next`
fn next(&mut self) -> Option<Self::Item>;
// `StreamExt::next`
fn next(&mut self) -> Next<'_, Self>
where
Self: Unpin; // This is different
This extra Unpin
bound is only needed when a trait is implemented in terms of
poll functions - which by design take Pin<&mut Self>
. And so we need a way to
later on opt out of those bounds. You can see this same mechanism in action with
the other poll-based traits, such as
AsyncWriteExt::write
which also has a Self: Unpin
bound.
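Here's a sketch of that friction in practice, using the futures crate (this example is mine, not from the post): stream::unfold stores a future internally, so the resulting stream is not Unpin and has to be pinned before StreamExt::next can be called on it.

use futures::{pin_mut, stream, StreamExt};

async fn demo() {
    // This stream holds an `async` block, so it is `!Unpin`.
    let s = stream::unfold(0u32, |n| async move {
        (n < 3).then(|| (n, n + 1))
    });
    // Without this, `s.next()` fails the `Self: Unpin` bound.
    pin_mut!(s);
    while let Some(n) = s.next().await {
        println!("{n}");
    }
}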
Instead if we recognize that the async counterparts to Rust's core traits don't
actually need to be pinned in order to be implemented, we can drop Pin<&mut Self>
from the signature. And since our type isn't pinned to begin with,
we no longer have to opt-out of it being pinned via Unpin
meaning all the
extra Unpin
bounds go away. You can see this in action in the
async-iterator
crate
which provides a diverse range of methods on async iterator, none of which
require additional Unpin
bounds to function.
To stay on the topic of API-shapes: one major downside of fn poll_next
is that
the "method to implement" and "method to call" are different methods. In the
regular iterator trait, there is only one method next
which is both
implemented and called. Instead the poll-API is only meant to be implemented,
and in virtually all cases the next
function is the one you want to call. This
is a major deviation from how all other traits work in the stdlib today.
This isn't just limited to async Iterator
either. Presumably we'd want to
adapt this approach for all traits in the stdlib. That means users of async
Rust would need to think of traits in the stdlib as somehow "different", and
remember that they cannot directly implement the methods they're calling. Among
others, the following APIs would be affected:
poll-based stdlib traits
trait name | to be implemented | to be called | is same? |
---|---|---|---|
async Read | fn poll_read | async fn read | ❌ |
async Write | fn poll_write | async fn write | ❌ |
async BufRead | fn poll_fill_buf | async fn fill_buf | ❌ |
async Seek | fn poll_seek | async fn seek | ❌ |
Instead, if we base these traits on async fn
, the method to implement and the
method to call are identical. And as we've covered earlier, if anyone would want
to manually author a poll
-based state machine for any of these traits,
poll_fn
provides a uniform way to do so for all async traits:
async-fn based stdlib traits
trait name | to be implemented | to be called | is same? |
---|---|---|---|
async Read | async fn read | async fn read | ✅ |
async Write | async fn write | async fn write | ✅ |
async BufRead | async fn fill_buf | async fn fill_buf | ✅ |
async Seek | async fn seek | async fn seek | ✅ |
This might seem like a minor point, but we have to consider that every deviation from existing norms is a point of friction for users. To zoom out slightly: I don't believe that async Rust inherently needs to be much more difficult than regular Rust. But the missing language features, combined with subtle differences like these, eventually add up and create an experience which is sufficiently different that the resulting system feels like an entirely different language. When in reality it does not need to be. For good measure here are the existing non-async stdlib traits:
non-async stdlib traits
trait name | to be implemented | to be called | is same? |
---|---|---|---|
Read | fn read | fn read | ✅ |
Write | fn write | fn write | ✅ |
BufRead | fn fill_buf | fn fill_buf | ✅ |
Seek | fn seek | fn seek | ✅ |
So far we've only discussed the implementation side of the traits. However that
isn't the complete story, and we need to consider auto traits and other subtle
semantics too. So let's start looking at those, starting with
object-safety.
Out of the box poll
-based traits are dyn-safe. Say we wanted to implement a
poll-based version of async iterator which can produce an infinite number of
meows, we could create a dyn variant like so
(playground):
struct Cat;
impl AsyncIterator for Cat {
type Item = String;
fn poll_next(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
Poll::Ready(Some("meow".to_string()))
}
}
fn dyn_iter() -> Box<dyn AsyncIterator<Item = String>> {
Box::new(Cat {})
}
This is the same as any other dyn-safe trait, and doesn't require any
additional steps. Nice! Now what happens if we try and rewrite it to use async fn next
? Well, we would probably try and write it like so
(playground):
#![feature(async_fn_in_trait)]
struct Cat;
impl AsyncIterator for Cat {
type Item = String;
async fn next(&mut self) -> Option<Self::Item> {
Some("meow".to_string())
}
}
fn dyn_iter() -> Box<dyn AsyncIterator<Item = String>> {
Box::new(Cat {})
}
However if we now try and compile this code we get the following error:
error[E0038]: the trait `AsyncIterator` cannot be made into an object
--> src/lib.rs:16:22
|
16 | fn dyn_iter() -> Box<dyn AsyncIterator<Item = String>> {
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `AsyncIterator` cannot be made into an object
|
note: for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
--> src/lib.rs:5:14
|
3 | trait AsyncIterator {
| ------------- this trait cannot be made into an object...
4 | type Item;
5 | async fn next(&mut self) -> Option<Self::Item>;
| ^^^^ ...because method `next` is `async`
= help: consider moving `next` to another trait
This would not happen if we were using the async-trait
crate; it only happens if we
use the AFIT language feature. But what exactly is going on here? In order for
async traits to work in dyn contexts, we need to find a place to store them
first. In the async-trait
crate this is always a Box
, but in stable Rust we
can't just do that because we've pinky-promised not to perform any "hidden"
allocations 2. Solving this is pretty complicated, and will require a
feature like
dyn*
to land. With dyn*
rather than calling Box::new
you'd need to call a "dyn
adapter" instead. For
example:
fn dyn_iter() -> Box<dyn AsyncIterator<Item = String>> {
Boxed::new(Cat {}) // NOTE: using the `Boxed` dyn-adapter, not `Box`.
}
Though in the past we have used allocations in language features as placeholders: that's how async/.await was originally stabilized in 2019. It wasn't until a year or so later that support landed for async functions which didn't box in their desugaring.
Needing to replace Box::new
with Boxed::new
is not a major difference. And
it's still unclear what the upper bound on the ergonomics of dyn async traits
are, since work on it has been paused for the past year in favor of stabilizing
AFIT first. But it's going to be pretty confusing if certain async traits
require a Box
, but other async traits require a Boxed
notation.
I believe the better direction is for all async traits in the stdlib to use the same mechanism to work with dyn, even if it's slightly less convenient at first. We can then gradually work on improving those ergonomics for all async traits, both in the stdlib and the ecosystem. This ensures consistency, and enables us to design solutions which are shared by the entire async ecosystem, preventing a possible permanent bifurcation of the async trait space.
Another subtle outcome here is the interaction between async traits and the property that has historically been called "cancellation safety". I'm putting it in quotes because the property is not actually about whether it is "safe to cancel" a future. A few months ago I gave a talk about this, which I'll summarize in this section. I'll explain what "cancellation safety" is, how it currently falls short, and how we may be able to fix those shortcomings. I particularly want to show how we can bring the "cancellation safety" property into the type system, which could enable async functions (including AFIT) to automatically provide it.
"Cancellation safety" refers to the ability to drop and recreate futures without any "side-effects" being observable. It's important to note that the word "safety" in this context has nothing to do with memory safety: in the absence of linear types, all types in Rust must be memory-safe to drop, including futures. The "safety" in "cancellation safety" refers to logical correctness only. Being able to drop and recreate futures is the most relevant to the select! macro which commonly does that in a loop. But it can also come in useful if you're ever authoring future state machines by hand.
/// # Cancel safety
///
/// This method is cancel safe. If you use it as the event in a
/// `tokio::select!` statement and some other branch completes
/// first, then it is guaranteed that no data was read.
As I've mentioned, "cancellation safety" as a property currently only exists in
documentation. This means that in order to learn whether you can drop and
recreate futures without any actions you need to consult the documentation for
the method. The difference between async fn next
and fn poll_next
is that
for the former the property will be documented on the implementation, whereas
for the latter the property will be documented on the trait definition.
That practically means that with fn poll_next
you'll be able to remember that
all calls to next
are "cancellation-safe". Whereas with async fn next
you'll need to remember that for each implementation (or more likely: remember
which impls aren't "cancellation-safe").
But we should be legitimately asking ourselves whether using documentation for
this is the best we can do. RTFM is not
really how we do things in the Rust project, instead much preferring if the
compiler can tell you when you've messed up, which can then explain what to do
instead. This is possible because of Rust's "type
safety" guarantee, and APIs such as
select!
are decidedly not type-safe right now. Which is why even after
successfully compiling code, people regularly find runtime bugs in their
select!
statements.
I believe a better direction would be to bring "cancellation safety" out of the
docs and into the type system. We could do this by introducing a new
subtrait,
which I'll refer to in this post as AtomicFuture
(but we can call it anything
we like really, the semantics are what I care about most). We already have some
precedent for this in the iterator family of traits, where for example the
unstable
TrustedLen
trait
provides additional guarantees on top of the existing Iterator
trait. I
imagine this would look something like this:
//! ⚠️ This uses a placeholder name and is intended as a design-sketch only. ⚠️
/// A future which performs at most a single async operation.
trait AtomicFuture: Future {}
One of the downsides of the current "cancellation safety" system is that we have to manually inspect anonymous futures to tell whether they're cancellation-safe or not. I believe that by bringing "cancellation safety" into the type system, the compiler should be able to figure it out automatically if the following conditions are met:
1. The future contains at most a single .await point. That way if a future is cancelled, cancellation can never occur between two .await points.
2. All futures awaited within the async context implement the trait.
Any async fn or async {} block would automatically be able to implement AtomicFuture if those requirements are upheld, which I believe is something which shouldn't be too hard to figure out from the generated state machine 3. Maybe there is a case to be made for syntax for this too; I'm not sure. But that's something we can figure out later.
Also: this is something we'll want to do regardless if we ever
get generator functions. It would be pretty bad if gen fn
couldn't
automatically implement marker traits on the type it returns.
// ✅ -> impl Future + AtomicFuture
async fn foo() -> u32 { 12 }
// ✅ -> impl Future + AtomicFuture
async fn foo<F: AtomicFuture>(fut: F) -> F::Output {
fut.await
}
// ❌ -> impl Future
async fn foo<F: Future>(fut: F) -> F::Output {
fut.await
}
// ❌ -> impl Future
async fn foo<F: AtomicFuture>(fut1: F, fut2: F) {
fut1.await;
fut2.await;
}
The compiler will only be able to automatically figure out whether
AtomicFuture
can be implemented for futures returned by async fn
and
async {}
. For manually implemented futures the author of the future will need
to uphold those guarantees themselves. The bar to meet there is that the future
should only perform a single operation, and then return. read
is a good
example of a "stateless future", while the future join
operation
is a good example of a future which is not.
As a basis I think this is pretty good, but there are still some cases we
haven't covered yet. For example tokio's Mutex::lock
operation: it only performs one operation, but when it's dropped and recreated it'll be the
last in the unlock queue again - which in pathological cases could lead to
certain unlocks never being scheduled. It's not marked "cancellation-safe" in
the tokio docs, but the reason for that is not covered by the rules we've
described so far.
I think practically we'll want to play around with the rules a little. I don't
think we want or even can practically nail down what a "side-effect" is in Rust.
But if we intentionally don't make the presence of AtomicFuture
a safety
invariant, we can probably get away by adding broad rules such as:
"Cancelling and recreating an
AtomicFuture
should not cause logic errors"
What does that exactly mean? That's up for interpretation. But that might be fine for our purposes. I think it's worth experimenting a little, so we can settle on something that works for us. Ideally a definition which can be automatically implemented by the compiler for async blocks and functions, but if we can't that's probably also okay. The key takeaway here should be that if we care enough about "cancellation-safety" to make it a guarantee of our public APIs, we should be trying to bring it into the trait system instead.
I think if we're to have an honest conversation about poll-based APIs, we should
acknowledge that working with Pin
is not a great experience for most people.
I'm a world-leading expert in async Rust, and four years post stabilization I
still regularly struggle to use it. I've seen folks on both T-lang and T-libs
struggle with it. I've seen very capable engineers at Microsoft struggle with
it. And if experienced folks struggle to use Pin, I don't think we can
reasonably expect those with less experience to use it without many problems either.
I've covered some of the unique difficulties of Pin
on this blog before. I
believe pin is hard to use in part because pin inverts Rust's usual semantics
via a combination of auto-traits and wrapper structs. Another reason is because
it relies heavily on the concept of "pin-projection" which requires unsafe
and
is not directly represented in either the language or the stdlib. Which in turn
means that even the most common interactions with Rust's pinning system rely
on third-party ecosystem crates such as
pin-project and until
recently pin-utils.
We currently don't have a clear path to fix Pin's ergonomics. If we want Rust to
provide a way to make pin projections safe, we'll at least need to make a
breaking change to the Unpin
trait 4. As well as a whole lot of
design
work to
integrate it into the language. And while this may be something we'll want to do
eventually, we really need to ask ourselves whether this is something we want to
do now. Because taking on one project necessarily means not taking on another.
Safe pin projections in the language necessarily require that
Unpin
is an unsafe
trait. It's currently not unsafe, and changing it would
be a breaking change.
I honestly think async functions in traits, async drop, async iteration, and
async closures all are far more important things to work on right now, and they
should take priority in WG-async over directly fixing Pin's rough edges. The
most effective strategy to reduce the cost of Pin
we have in Rust right now is
to simply make pin less common. If Pin
is only needed to implement intrusive
collections, self-referential types, and manual future state machines, then for the time being we can treat it as an experts-only system. And once we have more
bandwidth we can revisit it later to tidy it up.
A thing that worries me in particular about fn poll_next
is its inability to
co-evolve with Rust's needs. As we've seen we're not just considering adding an
"async version of iterator", we're also talking about "a pinned version
of iterator". But there are other flavors of iterator being discussed as well,
such as: "a fallible version of iterator", "a lending version of
iterator", "a const version of iterator", and so on. It's impossible to
know today which requirements Rust will have ten years from now. All we really
know is that the decisions we make then will be affected by the decisions we
make today 5.
Ten years ago we were unsure whether it was even possible to publish a new systems programming language. Five years ago we were unsure whether Rust would be able to escape its browser niche and reach mainstream adoption. Fast forward to today, and Rust is being used in the hearts of major operating systems (Android, Linux, and Windows), and every packet on the internet is more likely than not routed through some Rust code at some point (Cloudflare, AWS, and Azure). I don't think we can reasonably anticipate all the requirements Rust will face ten years from now. So I believe one of the most important things for a programming language is to find ways to keep evolving in the face of changing requirements.
Say for a second we did add an fn poll_next
-based version of async iterator. If we then wanted to, say, add a lending version of iterator, we'd probably also want to add a lending version of async iterator too. That would be four
separate families of traits, and that's just for three variants. If we actually
wanted to add support for "pinned iterators" or "fallible iterators" we'd be
looking at nine or maybe even seventeen different families of traits. And nobody
reasonably wants that.
This is the reason why I believe async fn next
is the better direction. If we
add it to the stdlib via an effect-generic mechanism, both "iterator" and "async
iterator" could be served using a single trait which is generic over the async
effect. We could use the same mechanism for other traits such as Read
and
Write
, but also maybe some less common traits like Into
and Display
.
//! A version of iterator which can be implemented as either sync or async.
//! This uses placeholder syntax just to illustrate the idea.
#[maybe_async]
trait Iterator {
type Item;
#[maybe_async]
fn next(&mut self) -> Option<Self::Item>;
}
fn poll_next
forces us to duplicate existing interfaces in order to expose
those same capabilities, while async fn next
enables us to extend existing
interfaces with new capabilities. It doesn't immediately solve all of the
limitations of iterator; effect generics won't provide an answer for how to add
support for "lending" or "pinned" iterator variants. But it provides an answer
to some other problems, like how to add support for async, const, and
fallibility. And in doing so encourages us to find similar solutions for the
problems which remain, even if we don't yet know what they'll look like.
In this post I've presented a case for basing the async Iterator trait on async fn next over fn poll_next. To summarize the arguments:
1. async fn next and fn poll_next generate identical code, which means they have identical performance profiles. Implementations which need to access the low-level future state machine of async fn next can do so using poll_fn.
2. fn poll_next is the async version of PinnedIterator; async fn next is the async version of Iterator. Adding a trait for pinned async iteration could be useful, but realistically it should mirror a synchronous "pinned iterator" design. And it could be written using async fn next taking a pinned self-type.
3. Extra Unpin bounds are a way in which "async iterator" presently meaningfully deviates from its synchronous counterpart, making it seem like deeper changes are needed to get to the same async functionality. By not requiring self: Pin<&mut Self>, no additional where Self: Unpin bounds are required, as seen on methods such as StreamExt::next.
4. async fn next has one method to both implement and use. fn poll_next requires two methods: one which must be implemented, and another which must be called. This is inherently more difficult to use, and as a mechanism is unique to poll functions other than Future.
5. async fn next and fn poll_next need to use subtly different mechanisms to create dynamically dispatched objects. Adding support for dynamic dispatch is still in progress for AFITs, but that seems like it's mostly a matter of time. Neither approach is likely going to be much harder to use than the other, but the subtle differences may be difficult to internalize for users. Diagnostics seem like they'll play an important role.
6. With fn poll_next users can blindly assume every implementation provides a cancellation-safe next method. If async iterator is based on async fn next, users will have to check the implementation's docs to learn whether the next method is "cancellation-safe". However, rather than keeping "cancellation safety" as a documentation-only property, we should probably be working to bring it into the type system instead.
7. The Pin family of APIs in Rust is notorious for being difficult to use. One of the most effective ways we have to reduce the difficulty of Pin is by limiting users' exposure to it. By basing Rust's core async traits on async functions we can reduce the number of pin-based APIs, making async Rust more accessible to more people.
8. async fn next can be implemented as an extension to the existing iterator trait via effect generics. fn poll_next would most likely need to be implemented as a standalone trait, resulting in a manual duplication of the APIs. Support for async is not the last feature we'll want to add to iterators: there are currently ecosystem demands to support self-referential iteration, lending iteration, and fallible iteration. It's not practical to add individual traits for all of these and their combinations. Nor is it reasonable to assume we can anticipate all needs which may arise in the future. By extending rather than duplicating, we leave room to meet those needs as they arise.
I believe basing the async Iterator trait on async fn next is superior across all axes, and I hope this post sufficiently makes that case. In a future post I'd like to round out this series by covering the desugaring of an async iteration notation. Together with an RFC for effect-generic trait definitions, this will be one of the last steps necessary before we can re-RFC RFC 2996: Async Iterator to cover the full scope of async iteration.
Thanks to Eric Holk for reviewing an earlier draft of this post.
Eryn Wells 23/09/2023 | Source: Eryn Rachel Wells
I really enjoyed looking through the images on Docubyte’s Guide to Computing. It depicts machines from the early days of modern computing – think IBM mainframes, PDP-1s, and lots of midcentury modern design – in a way I found really intriguing.
Anonymous 23/09/2023 | Source: Irrational Exuberance
These are speaking notes for my October 4th QCon talk in San Francisco.
Slides for this talk.
Over the course of my career, I’ve frequently heard from colleagues, team members and random internet strangers with the same frustration: the company doesn’t have an Engineering strategy. I don’t think this problem is unique to Engineering: it’s also common to hear folks complain that they’re missing a strategy for Product, Design or Business. But, whereas I don’t feel particularly confident speaking to why so many companies are missing a clear Business or Product strategy, I’ve come to have some clear opinions about why so many engineering organizations don’t have a written strategy.
I’ve been fortunate to be involved in architecture at many companies, including designing several iterations of Stripe’s approach to architecture (which taught me some lessons). From that experience, I’ve tried writing about this topic quite a few times:
In this talk, I hope to pull those ideas together into a unified theory of Engineering strategy, with a particular emphasis on how you can drive strategy even if you’re not the company’s CTO. Another way to think about this talk is that I hope to “Solve the Engineering Strategy Crisis” that so many people keep emailing me about.
In this talk, I’ll work through five topics around engineering strategy:
Whenever I think about strategy, I start from Richard Rumelt’s Good Strategy, Bad Strategy, which identifies three pillars of effective strategy: a diagnosis, a guiding policy, and coherent actions.
I’ve found that definition extremely useful, and Rumelt’s views have shaped how I think about Engineering strategy. In particular, I believe that Engineering strategy comes down to two core components: an honest diagnosis and a practical approach.
Sure, that sounds nice, but what does that mean? To clarify that a bit, let’s work through an example scenario. This is a scenario that many folks have experienced in their career:
I believe this sequence of events keeps recurring because of bad strategy, and is preventable with good strategy. Let’s work through the components of strategy to look at how bad strategy causes this scenario and how good strategy could prevent it.
Let’s start with “honest diagnosis”, and in particular what a bad diagnosis would look like for this scenario. (For the record, I don’t think “dishonest” is the opposite of an “honest” diagnosis; they tend to be “bad” rather than “dishonest.”)
Here’s a bad diagnosis:
OK, but then let’s briefly consider what a good diagnosis might look like:
Disappointingly, this is the same list in both cases. In a small startup with only one simple product, you probably can migrate from a monolith to services in a few months, maybe even less. In a larger startup, that’s almost certainly impossible.
An honest diagnosis is a reality-based assessment of your circumstances. Nothing is universally honest. (Neither is anything universally bad.)
Once you have a reality-based assessment to inform your honest diagnosis, you can move on to the second half of your strategy: a practical approach. The most important thing to keep in mind is that a practical approach makes explicit tradeoffs that acknowledge your real constraints. For example, here are some good approaches, even if they are a bit painful to write:
What makes these good is not that they’re beautiful, ambitious statements of how we work. These are not lofty “engineering values”; they are specific acknowledgments of how you’ll navigate your constraints.
Thinking back to our scenario with Hammer and Widget products, our practical approach might look like:
Once again, tragically, a practical approach depends on your company and your circumstances. You could write the same exact practical approach and have it go very badly indeed, which is why senior leaders often fail when they reapply familiar strategies at new companies.
Hopefully you’ll accept the definition of “engineering strategy = honest diagnosis + practical approach”. Next, I’ll try to convince you that this definition is actually useful.
Let’s start making the case for engineering strategy by talking through some practical examples of engineering strategy that I’ve encountered in my career.
Diagnosis:
Approach:
Impact of Stripe’s strategy:
Diagnosis:
Approach:
1. We are a product engineering company
2. We adopt new technologies to create valuable product capabilities
3. We do not adopt technologies for other reasons
4. We write all code in the monolith unless there is a functional requirement that makes it extremely difficult to do so
5. Exceptions to the above are granted exclusively by the CTO, who will approve in writing in the #engineering channel
Impact of Calm’s strategy:
Diagnosis:
Approach:
Impact of Uber’s strategy:
These strategies are effective for a few reasons:
This is the power of making explicit, consistent tradeoffs across an entire organization.
In addition to arguing the value of strategy from these positive examples, it’s easy to find negative examples where a missing or inconsistent strategy caused a great deal of pain:
I’m sure you can think of examples from your careers as well!
Interestingly, Uber and Stripe are well-known technology companies, and I wrote a bit above about what their technology strategies were, but neither was particularly proactive at writing their strategies down.
I’ve come to believe that:
This is the first really important takeaway from this talk: you can solve half the engineering strategy crisis by just writing stuff down.
We’ll get to solving the other half in a second.
There are probably an infinite number of reasons why written strategy outperforms implicit strategy, but a few that I’ve seen matter in particularly important ways are:
Two primary ways:
This strategy is a modified version of the one described in Writing an engineering strategy. At its core, the thing to recognize is: it’s easy to get CTO buy-in if you write the strategy that the CTO wants.
To do that:
If you’re reading this and your biggest thought is, “My CTO will never let me do this”, then 7 out of 10 times, I promise you that you’re simply not writing the strategy that the CTO wants. The other 3 out of 10 times, there’s some internal conflict that the CTO just isn’t willing or able to resolve, which is a bit trickier, but you can approach it via the next strategy.
The approach to bottoms-up rollout is described in Write five, then synthesize:
This approach definitely takes a long time, but I’ve seen it work a number of times. Even if your current strategy has some gaps in it, birthing it into an explicit strategy document will always make it much easier to address those gaps.
Here’s what we talked about:
Within those topics, the two disappointingly straightforward steps that you can take to solve the engineering strategy crisis are:
This might not be what you were excited to do when you wrote about getting more strategic in your annual goals, but it’s what actually works.
Augusto Campos 20/09/2023 | Source: TRILUX
The new folding keyboard arrived today; it will make my day-to-day mobile-professional kit even more compact. That kit was once a backpack carrying a notebook, then became a case with a compact keyboard, and now - if all goes well - will be a glasses case or a small pouch.
The photo shows the new arrival here at home, connected to the phone (which is docked in the stand that is part of the keyboard itself) and, in the inset, the keyboard folded shut, a rectangle of 20 x 4.5 cm and 1.5 cm thick - more or less the size of a case for 3 pens.
I still need to test it more thoroughly and for longer, but the first impression was positive: I typed comfortably, the phone stand was firm enough to use at a desk, and the Bluetooth connection worked on the first try.
I chose the US layout, out of habit. I saw sellers offering a similar model with a "Portuguese" layout, and I don't know whether that referred to the language - in which case it would be our ABNT standard - or whether it was a reference to Portugal.
As with other compact keyboards (including the one I had been using, which had full-size individual keys), the keys are overloaded, with some accents and symbols requiring Fn-key combinations. Nothing serious, and we're used to it by now (physical keys for every function would be better, but it's a delicate balance when you value a compact form factor).
It connects to 3 different devices, with key combinations to switch between them. It charges via Micro-USB, and the listing promises 50 h of use per full charge - but I'm a long way from being able to test that, since it only just arrived.
I bought it on AliExpress, from this seller, but I can't confirm it was the cheapest, the fastest, etc. I ordered it on September 10 and it arrived today, less than two weeks later. I paid the advertised price on the site, with no surcharges or customs procedures.
O artigo "Um teclado dobrável pra ser profissional móvel com o celular" foi originalmente publicado no site TRILUX, de Augusto Campos.
Yegor Bugayenko 19/09/2023 | Source: Yegor Bugayenko
A friend of mine recently asked me what five things he should do in order to grow his technical career in a big company. He is not interested in being a big manager, or a CEO. Rather, he wants to be a software expert, an architect, an owner of a technology, and eventually a “Fellow.” I’m not sure I was qualified to give such advice, but I did anyway. This is what I told him. Maybe this will also work for you.
Stay focused on one problem for many years. I literally mean a “problem”—something that bothers people now but will stop bothering them when you solve it. Ideally, first and foremost, it should bother you personally. If you can’t specify in one sentence what the meaning of your office life is—you don’t have a problem to solve. Find one.
A strong multi-year focus on one particular problem will most likely lead to a rather boring office life. People around you will be switching projects, accepting offers from crypto-startups, changing technologies, programming languages, and teams. You, unlike them, will remain focused on one thing for years and years. Imagine how boring it will look to them and to yourself. So be it. Accept it.
Moreover, if you don’t see significant results (and you won’t for years!), you’ll be tempted to switch to something else, where the outcomes seem more promising. Don’t.
Even when you change companies, remain loyal to the problem you chose as “yours” years ago. Don’t betray it. It’s yours. Your lifetime mission is to solve it. Who cares which company you are in? A company is just a temporary sponsor of your mission.
The problem must be as monumental as finding a cure for cancer. Ensure it’s bigger than your team, your company, and even your lifespan. The word “ambitious” certainly fits: it must be an ambitious idea. How do you know it’s big and ambitious enough? Count your enemies. If you have many of them—which could include your bosses, colleagues, spouse, and, of course, your haters on Twitter—you have a solid case. Conversely, if everyone loves your idea and supports you, your challenge might not be big enough.
Think about it: If it is big enough, many people have already tried to solve it. They failed. Naturally, they would love to see you fail too. If you don’t, it could dent their self-respect. It’s basic psychology.
The more enemies, the better! However, you should have a few allies. I’m referring to high-level technical people, like a CTO, VP of Technology, Chief Architect, or Fellow. They might not be technically competent in your particular domain, but that doesn’t matter. Strive to establish an information channel between you and them, and periodically share updates. Keep them informed about your progress and occasionally seek their advice. They will shield you from most of the attacks your enemies might launch.
To clarify, it’s impossible to ascend in a human hierarchy on your own, no matter how bright you are. You need a cadre of supporters within the company—individuals who back you unconditionally. A few are sufficient. They must be personally loyal to you. If you leave the company, they should follow you without hesitation.
It would be ideal for all of these friends to be part of your team. However, that’s not always feasible. Similarly, it would be wonderful if all these friends were technically competent, but that’s not always the case. In contrast, loyalty doesn’t often coincide with expertise. Having a friend who is both loyal and intelligent is a luxury.
Finally, maintain a connection with the younger generation that’s succeeding us—students. Engage with them, learn from them, and ensure you understand their needs and aspirations. They represent the industry’s future. If you treat them right, they will work for you with enthusiasm unmatched by any other employee.
Strengthening ties with the academic world will unquestionably reinforce your position within your company.
Eli Bendersky 16/09/2023 | Source: Eli Bendersky's website
I put together a simple static file server in Go - useful for local testing of web applications. Check it out at https://github.com/eliben/static-server
If you have Go installed on your machine, you don't have to download anything else; you can run:
$ go run github.com/eliben/static-server@latest
And it will start serving the current directory! Run it with -help for usage information. No configuration files needed - the default is useful and you can adjust it to your needs using command-line flags.
When developing web applications locally, for basic test cases we can open an HTML file directly in the browser (using file:/// scheme). However, this is sometimes insufficient, and in several scenarios it's necessary to properly serve the HTML (along with its JS and CSS). Some cases where I encountered this are web applications that use at least one of:
In the past, when I was more active in the Python ecosystem, I used python -m SimpleHTTPServer <port> quite a bit. While it's nice, it has some issues too: it's not very configurable, and it requires Python to be installed.
Another option I've used is http-server from the Node.js ecosystem; in fact, it has served as the inspiration for static-server. You can run it with npx without installing, and it's also configurable through command-line flags, without requiring configuration files.
But we can't expect all Go developers to have npm or npx installed. Moreover, sometimes you want to tweak the server a bit and digging in JavaScript is not any Go programmer's idea of a good time. Like many tools in that ecosystem, this Node.js-based HTTP server is all in on dependencies - with 13 of them, it's not easy to understand or modify its code; much of it is split across multiple helper packages, and making changes can be tricky.
Spinning up a static file server in Go is very easy - I wrote a whole blog post about the possibilities at some point. The simplest static server to serve the current working directory is just:
package main
import "net/http"
func main() {
port := ":8080"
handler := http.FileServer(http.Dir("."))
http.ListenAndServe(port, handler)
}
Having found myself plopping a small server.go with these contents in too many web projects, I decided enough was enough. Thus static-server was born.
static-server is simple, yet versatile. It will do the right thing by default, with no flags whatsoever. But you can also use flags to configure a few aspects, e.g. the port it serves on, CORS support, serving via TLS, and how logging is done.
static-server is hackable and easy to understand. All the code is in a single file (with fewer than 200 lines of code, including comments and handling flags) and there are no dependencies (except one package that is only used for testing).
I find static-server very useful, and I hope others will too. If you run into any problems or have questions, open a GitHub issue or send me an email.
Augusto Campos 15/09/2023 | Source: TRILUX
In 1998, downloading (over dial-up!) an ISO image to burn an installation CD for Linux, BSD, and other systems was still complicated, and along came this online store that sold each CD for US$2 (plus shipping to Brazil), with no printed inserts, covers, support, manual, or frills.
It was a kind of successor to Walnut Creek (simtel⸱net or cdrom⸱com), which had been supplying us with distribution CDs since 1991, but offered them in carefully produced packaging, often accompanied by a printed manual, and at higher prices.
I soon traded Walnut Creek for Cheapbytes, but I had been a fan of theirs since my OS/2 phase at the start of the decade – and I bought from them (or received as promos) plenty of things besides Linux distributions: CDs with Project Gutenberg books, complete GNU repositories, OS/2 software, BSD operating systems, Perl repositories, clip art, and much more.
At the turn of the century, broadband made downloads accessible and many people came to rely on the free CDs the Ubuntu project mailed out, so those names from '90s culture ended up going out of circulation – but they won't be forgotten.
An addendum.
As a tribute to those stores, here's my own story: my first Linux installation on a personal computer (mine, not an employer's) was from that double Slackware CD from CDROM·COM, in 1996.
I was already an AIX and HP-UX (+ GNU) user, so I felt right at home – but the CD came with a hefty printed book of HOWTOs and man pages.
The install was easy, but getting the video card to work in graphical mode (X), the CD drive, and the modem was a weeks-long struggle.
The article "Quem lembra da Cheapbytes?" ("Who remembers Cheapbytes?") was originally published on TRILUX, Augusto Campos's site.
Julia Evans 14/09/2023 | Source: Julia Evans
Hello! I was talking to a friend about how git works today, and we got onto the
topic – where does git store your files? We know that it’s in your .git
directory, but where exactly in there are all the versions of your old files?
For example, this blog is in a git repository, and it contains a file called content/post/2019-06-28-brag-doc.markdown. Where is that in my .git folder?
And where are the old versions of that file? Let’s investigate by writing some
very short Python programs.
.git/objects
Every previous version of every file in your repository is in .git/objects.
For example, for this blog, .git/objects contains about 2700 files.
$ find .git/objects/ -type f | wc -l
2761
note: .git/objects
actually has more information than “every previous version
of every file in your repository”, but we’re not going to get into that just yet
Here’s a very short Python program
(find-git-object.py) that
finds out where any given file is stored in .git/objects.
import hashlib
import sys

def object_path(content):
    header = f"blob {len(content)}\0"
    data = header.encode() + content
    digest = hashlib.sha1(data).hexdigest()
    return f".git/objects/{digest[:2]}/{digest[2:]}"

with open(sys.argv[1], "rb") as f:
    print(object_path(f.read()))
What this does is:
1. Construct a header (blob 16673\0 in this case) and combine it with the contents
2. Calculate the sha1 hash of the result (8ae33121a9af82dd99d6d706d037204251d41d54 in this case)
3. Turn that hash into a path (.git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54)
We can run it like this:
$ python3 find-git-object.py content/post/2019-06-28-brag-doc.markdown
.git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54
The term for this storage strategy (where the filename of an object in the database is the same as the hash of the file’s contents) is “content addressed storage”.
One neat thing about content addressed storage is that if I have two files (or
50 files!) with the exact same contents, that doesn’t take up any extra space
in Git’s database – if the hash of the contents is aabbbbbbbbbbbbbbbbbbbbbbbbb
, they’ll both be stored in .git/objects/aa/bbbbbbbbbbbbbbbbbbbbb
.
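As a quick demonstration (my own, reusing the object_path function from find-git-object.py above), the path depends only on the bytes of the content, never on the filename:
import hashlib

def object_path(content):
    header = f"blob {len(content)}\0"
    data = header.encode() + content
    digest = hashlib.sha1(data).hexdigest()
    return f".git/objects/{digest[:2]}/{digest[2:]}"

# identical contents, identical path - git stores the blob just once
print(object_path(b"hello\n"))  # .git/objects/ce/013625030ba8dba906f756967f9e9ca394464a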
If I try to look at this file in .git/objects, it gets a bit weird:
$ cat .git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54
x^A<8D><9B>}s<E3>Ƒ<C6><EF>o|<8A>^Q<9D><EC>ju<92><E8><DD>\<9C><9C>*<89>j<FD>^...
What’s going on? Let’s run file
on it:
$ file .git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54
.git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54: zlib compressed data
It’s just compressed! We can write another little Python program called decompress.py
that uses the zlib
module to decompress the data:
import zlib
import sys

with open(sys.argv[1], "rb") as f:
    content = f.read()
print(zlib.decompress(content).decode())
Now let’s decompress it:
$ python3 decompress.py .git/objects/8a/e33121a9af82dd99d6d706d037204251d41d54
blob 16673---
title: "Get your work recognized: write a brag document"
date: 2019-06-28T18:46:02Z
url: /blog/brag-documents/
categories: []
---
... the entire blog post ...
So this data is encoded in a pretty simple way: there’s this
blob 16673\0
thing, and then the full contents of the file.
One thing that surprised me the first time I learned it: there aren't
any diffs here! That file is the 9th version of that blog post, but the version
git stores in the .git/objects
is the whole file, not the diff from the
previous version.
Git actually sometimes also does store files as diffs (when you run git gc
it
can combine multiple different files into a “packfile” for efficiency), but I
have never needed to think about that in my life so we’re not going to get into
it. Aditya Mukerjee has a great post called Unpacking Git packfiles about how the format works.
Now you might be wondering – if there are 8 previous versions of that blog
post (before I fixed some typos), where are they in the .git/objects
directory? How do we find them?
First, let’s find every commit where that file changed with git log:
$ git log --oneline content/post/2019-06-28-brag-doc.markdown
c6d4db2d
423cd76a
7e91d7d0
f105905a
b6d23643
998a46dd
67a26b04
d9999f17
026c0f52
72442b67
Now let’s pick a previous commit, let’s say 026c0f52
. Commits are also stored
in .git/objects
, and we can try to look at it there. But the commit isn’t
there! ls .git/objects/02/6c*
doesn’t have any results! You know how we
mentioned “sometimes git packs objects to save space but we don’t need to worry
about it?”. I guess now is the time that we need to worry about it.
So let’s take care of that: we need to unpack the objects from the pack files. I looked it up on Stack Overflow and apparently you can do it like this:
$ mv .git/objects/pack/pack-adeb3c14576443e593a3161e7e1b202faba73f54.pack .
$ git unpack-objects < pack-adeb3c14576443e593a3161e7e1b202faba73f54.pack
This is weird repository surgery so it’s a bit alarming but I can always just clone the repository from Github again if I mess it up, so I wasn’t too worried.
After unpacking all the object files, we end up with way more objects: about 20000 instead of about 2700. Neat.
$ find .git/objects/ -type f | wc -l
20138
Now we can go back to looking at our commit 026c0f52
. You know how we said
that not everything in .git/objects
is a file? Some of them are commits! And
to figure out where the old version of our post
content/post/2019-06-28-brag-doc.markdown
is stored, we need to dig pretty
deep into this commit.
The first step is to look at the commit in .git/objects.
The commit 026c0f52
is now in
.git/objects/02/6c0f5208c5ea10608afc9252c4a56c1ac1d7e4
after doing some
unpacking and we can look at it like this:
$ python3 decompress.py .git/objects/02/6c0f5208c5ea10608afc9252c4a56c1ac1d7e4
commit 211tree 01832a9109ab738dac78ee4e95024c74b9b71c27
parent 72442b67590ae1fcbfe05883a351d822454e3826
author Julia Evans <[email protected]> 1561998673 -0400
committer Julia Evans <[email protected]> 1561998673 -0400
brag doc
We can also get the same information with git cat-file -p 026c0f52
, which does the same thing but does a better job of formatting the data. (the -p
option means “format it nicely please”)
This commit has a tree. What’s that? Well let’s take a look. The tree’s ID
is 01832a9109ab738dac78ee4e95024c74b9b71c27
, and we can use our
decompress.py
script from earlier to look at that git object. (though I had to remove the .decode()
to get the script to not crash)
$ python3 decompress.py .git/objects/01/832a9109ab738dac78ee4e95024c74b9b71c27
b'tree 396\x00100644 .gitignore\x00\xc3\xf7`$8\x9b\x8dO\x19/\x18\xb7}|\xc7\xce\x8e:h\xad100644 README.md\x00~\xba\xec\xb3\x11\xa0^\x1c\xa9\xa4?\x1e\xb9\x0f\x1cfG\x96\x0b
This is formatted in kind of an unreadable way. The main display issue here is that
the commit hashes (\xc3\xf7$8\x9b\x8dO\x19/\x18\xb7}|\xc7\xce\
…) are raw
bytes instead of being encoded in hexadecimal. So we see \xc3\xf7$8\x9b\x8d
instead of c3f76024389b8d
. Let’s switch over to using git cat-file -p
which
formats the data in a friendlier way, because I don’t feel like writing a
parser for that.
$ git cat-file -p 01832a9109ab738dac78ee4e95024c74b9b71c27
100644 blob c3f76024389b8d4f192f18b77d7cc7ce8e3a68ad .gitignore
100644 blob 7ebaecb311a05e1ca9a43f1eb90f1c6647960bc1 README.md
100644 blob 0f21dc9bf1a73afc89634bac586271384e24b2c9 Rakefile
100644 blob 00b9d54abd71119737d33ee5d29d81ebdcea5a37 config.yaml
040000 tree 61ad34108a327a163cdd66fa1a86342dcef4518e content <-- this is where we're going next
040000 tree 6d8543e9eeba67748ded7b5f88b781016200db6f layouts
100644 blob 22a321a88157293c81e4ddcfef4844c6c698c26f mystery.rb
040000 tree 8157dc84a37fca4cb13e1257f37a7dd35cfe391e scripts
040000 tree 84fe9c4cb9cef83e78e90a7fbf33a9a799d7be60 static
040000 tree 34fd3aa2625ba784bced4a95db6154806ae1d9ee themes
This is showing us all of the files I had in the root directory of the
repository as of that commit. Looks like I accidentally committed some file
called mystery.rb
at some point which I later removed.
Our file is in the content
directory, so let’s look at that tree: 61ad34108a327a163cdd66fa1a86342dcef4518e
$ git cat-file -p 61ad34108a327a163cdd66fa1a86342dcef4518e
040000 tree 1168078878f9d500ea4e7462a9cd29cbdf4f9a56 about
100644 blob e06d03f28d58982a5b8282a61c4d3cd5ca793005 newsletter.markdown
040000 tree 1f94b8103ca9b6714614614ed79254feb1d9676c post <-- where we're going next!
100644 blob 2d7d22581e64ef9077455d834d18c209a8f05302 profiler-project.markdown
040000 tree 06bd3cee1ed46cf403d9d5a201232af5697527bb projects
040000 tree 65e9357973f0cc60bedaa511489a9c2eeab73c29 talks
040000 tree 8a9d561d536b955209def58f5255fc7fe9523efd zines
Still not done…
The file we’re looking for is in the post/
directory, so there’s one more tree:
$ git cat-file -p 1f94b8103ca9b6714614614ed79254feb1d9676c
.... MANY MANY lines omitted ...
100644 blob 170da7b0e607c4fd6fb4e921d76307397ab89c1e 2019-02-17-organizing-this-blog-into-categories.markdown
100644 blob 7d4f27e9804e3dc80ab3a3912b4f1c890c4d2432 2019-03-15-new-zine--bite-size-networking-.markdown
100644 blob 0d1b9fbc7896e47da6166e9386347f9ff58856aa 2019-03-26-what-are-monoidal-categories.markdown
100644 blob d6949755c3dadbc6fcbdd20cc0d919809d754e56 2019-06-23-a-few-debugging-resources.markdown
100644 blob 3105bdd067f7db16436d2ea85463755c8a772046 2019-06-28-brag-doc.markdown <-- found it!!!!!
Here the 2019-06-28-brag-doc.markdown
is the last file listed because it was
the most recent blog post when it was published.
Finally we have found the object file where a previous version of my blog post
lives! Hooray! It has the hash 3105bdd067f7db16436d2ea85463755c8a772046
, so
it’s in .git/objects/31/05bdd067f7db16436d2ea85463755c8a772046.
We can look at it with decompress.py:
$ python3 decompress.py .git/objects/31/05bdd067f7db16436d2ea85463755c8a772046 | head
blob 15924---
title: "Get your work recognized: write a brag document"
date: 2019-06-28T18:46:02Z
url: /blog/brag-documents/
categories: []
---
... rest of the contents of the file here ...
This is the old version of the post! If I ran git checkout 026c0f52 content/post/2019-06-28-brag-doc.markdown
or git restore --source 026c0f52 content/post/2019-06-28-brag-doc.markdown
, that’s what I’d get.
How git log works
This whole process we just went through (find the commit, go through the
various directory trees, search for the filename we wanted) seems kind of long
and complicated but this is actually what’s happening behind the scenes when we
run git log content/post/2019-06-28-brag-doc.markdown. It needs to go through
every single commit in your history, check the version (for example
3105bdd067f7db16436d2ea85463755c8a772046
in this case) of
content/post/2019-06-28-brag-doc.markdown
, and see if it changed from the previous commit.
That’s why git log FILENAME
is a little slow sometimes – I have 3000 commits in this
repository and it needs to do a bunch of work for every single commit to figure
out if the file changed in that commit or not.
Right now I have 1530 files tracked in my blog repository:
$ git ls-files | wc -l
1530
But how many historical files are there? We can list everything in .git/objects
to see how many object files there are:
$ find .git/objects/ -type f | grep -v pack | awk -F/ '{print $3 $4}' | wc -l
20135
Not all of these represent previous versions of files though – as we saw
before, lots of them are commits and directory trees. But we can write another little Python
script called find-blobs.py
that goes through all of the objects and checks
if it starts with blob
or not:
import zlib
import sys

for line in sys.stdin:
    line = line.strip()
    filename = f".git/objects/{line[0:2]}/{line[2:]}"
    with open(filename, "rb") as f:
        contents = zlib.decompress(f.read())
    if contents.startswith(b"blob"):
        print(line)
$ find .git/objects/ -type f | grep -v pack | awk -F/ '{print $3 $4}' | python3 find-blobs.py | wc -l
6713
So it looks like there are 6713 - 1530 = 5183
old versions of files lying
around in my git repository that git is keeping around for me in case I ever
want to get them back. How nice!
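As an aside (my own addition, in the same spirit as find-blobs.py): since every object's header starts with its type name, a small variation – call it count-objects.py, a hypothetical name – can break the objects down by type:
import sys
import zlib
from collections import Counter

counts = Counter()
for line in sys.stdin:
    line = line.strip()
    filename = f".git/objects/{line[0:2]}/{line[2:]}"
    with open(filename, "rb") as f:
        contents = zlib.decompress(f.read())
    # the header is "<type> <size>\0", so the type is everything before
    # the first space: blob, tree, commit, or tag
    counts[contents.split(b" ", 1)[0].decode()] += 1
print(counts)
$ find .git/objects/ -type f | grep -v pack | awk -F/ '{print $3 $4}' | python3 count-objects.py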
Here’s the gist with all the code for this post. There’s not very much.
I thought I already knew how git worked, but I’d never really thought about
pack files before so this was a fun exploration. I also don’t spend too much
time thinking about how much work git log
is actually doing when I ask it to
track the history of a file, so that was fun to dig into.
As a funny postscript: as soon as I committed this blog post, git got mad about
how many objects I had in my repository (I guess 20,000 is too many!) and
ran git gc
to compress them all into packfiles. So now my .git/objects
directory is very small:
$ find .git/objects/ -type f | wc -l
14
Vadim Kravcenko 14/09/2023 | Source: Vadim Kravcenko
In the software development realm, asking questions isn’t just a right—it’s a downright necessity. Let’s cut the crap and dive straight in: if you’re not asking questions, you’re doing a disservice to your career as a developer.
Remember those early days, navigating the Linux forums, throwing in a question, and getting smacked with a response so arrogant it could only be rivaled by a peacock in full strut? Yeah, that was us learning the ropes, the hard way. It was a brutal initiation into the art of question-asking, a skill as vital as coding itself. Can’t say I want to go back to those times. That’s why I want to talk about asking questions, so you don’t have to learn like I did.
Here’s the kicker: asking questions isn’t just about dodging the next error message or figuring out why your Python script is dragging its feet like a toddler refusing to leave a toy store. It’s about carving out the path to success for your project, ensuring everyone is marching to the same beat, and expanding your understanding of the domain you’re knee-deep in.
Let’s not just ask questions, let’s ask the right ones, and let’s do it without beating around the bush. Because in this game, a well-placed question can be the difference between a project that soars and one that’s ten times over budget. Let’s get to it.
Alright, it’s time to delve deeper into the anatomy of questions you ought to be throwing around in your daily grind. Essentially, we’re looking at a two-sided approach here: tackling technical issues and navigating project management, which yes, involves dealing with people and their quirks.
First up, the technical questions. These are your bread and butter as an engineer. They’re the questions that help you untangle code complexities, optimize processes, and essentially, just be a badass at what you do. These questions are your tools to chisel away at a problem until you reveal the elegant solution hidden beneath and get the AHA! moment.
Now, onto the project management questions, or as I like to call them, the “people questions”. These are equally vital, if not more so. They’re the questions that help you navigate the dynamics of a project, ensuring that everyone is rowing in the same direction. These questions help you gauge the pulse of your team, understand their concerns, and align their efforts towards a common goal. It’s about fostering collaboration through transparency — the more questions you ask, the clearer the destination gets.
First things first, before you shoot any technical question — get intimate with Google or any search engine of your preference e.g. DuckDuckGo. I mean, really get in there. Dive deep into the ocean of information available at your fingertips. You don’t want to be the one caught asking questions that scream “I didn’t bother to look this up”, do you?
Learn the search syntax that helps you isolate terms and look for exact matches of the issue you’re seeking solutions for — these are your lifesavers. For example, if you get an exception — look for an exact match of the error code; there’s always at least one person on the internet before you who had a similar issue and decided to share it. (We’ll talk about sharing later)
If your issue is not just an exception — start by understanding the core of your problem. Break it down into parts, and then start your hunt for information.
For example — your Hackintosh not booting is a complicated issue that should be broken down into multiple sub-searches. You’re going to start with general reasons why the PC can not boot, then dive into the boot loader, then dive into drivers, then into kexts, and so on — and each step will take you closer to a solution (might take you days though, from experience).
Look for similar problems or discussions online, delve into specialized forums, blogs, or library documentation — I can’t tell you how many times I’ve found my answer not on Stack Overflow but on some obscure, highly specialized blog with a single post about the exact issue I was having. Bless these kinds of people.
The goal here is, hopefully, to solve your issue without asking, or to grasp the finer details of your problem, to understand the underlying concepts and find a light that illuminates a path to a potential solution.
So, gear up, do your homework, and come prepared with a question that reflects your effort and genuine curiosity. It not only saves time but also paves the way for a more enriched and insightful conversation. Remember, a well-researched question is the first step towards a meaningful answer.
Before asking any question, check if you’re not leaning into a common bias where you’re so focused on the solution that you think you need, that you ignore all the rest.
It’s called the XY problem — a scenario where you get so fixated on your perceived solution (X) to a problem that you overlook or bypass the actual issue (Y) at hand. This can lead to you asking the wrong questions and answers that don’t really get to the heart of the matter.
To avoid falling into this trap, start by taking a step back to analyze the core issue you’re facing. Ignore the solution. Assume you know nothing. It’s essential to separate the problem from your initial approach to solving it. Be open to the possibility that your initial approach might not be the best or even the correct one.
Next, when formulating your question — don’t emphasize your perceived solution you have in mind, focus on articulating the problem. Remember, the goal is to solve the problem in the most efficient manner, not to get attached to a particular solution.
There’s no such thing as a “stupid” question in the world of software development. There, I said it. Yes, there might be questions where you feel the person did not put much effort into solving it on their own before coming to you, but there are no stupid questions. Whether you’re a junior developer or a seasoned staff-level engineer at Google — asking questions, even those that seem “dumb”, is not just okay, it’s necessary.
No one knows everything, and if senior developers show “vulnerability” of admitting stuff they don’t know, they pave the way for junior devs to feel comfortable asking questions. It’s a win-win.
Ditch the fear of looking dumb. If you’ve spent a solid 15 minutes trying to figure something out on your own, you’ve earned the right to ask that question of your peers. And hey, if your question leads to more questions, that’s even better — that means there’s even more stuff you will learn today.
Now, let’s address the elephant in the room: the antisocial tendencies of many engineers. Yes, some engineers might prefer to keep to themselves, but showing that you’ve done your homework before approaching them can break down those barriers. It signals that you respect their time as much as your own.
There’s this concept of a Slack channel where any question is allowed. Regardless of how dumb you think it is — it’s allowed. It’s basically a safe space where junior to mid-level devs can ask anything without fear of judgment, often finding answers among themselves. It works because it removes the pressure of bothering a potentially busy individual, fostering a community that isn’t afraid of asking questions in this channel.
So, let’s redefine the narrative: there are no “stupid” questions, only opportunities to drive forward with genuine curiosity and a desire to learn. Let’s encourage engineers at all levels to embrace this mindset. It’s simple, but not easy, yet it’s what propels us forward in the ever-evolving world of tech.
Alright, now that you’ve done your homework and for sure haven’t found the answer on the internet, it’s time to understand the context of who you should ask. I’d like to point out that the nature of your question dictates the ideal respondent and the channel of communication.
Here are some hypothetical scenarios that showcase how different questions require a different medium and tone of voice, as well as a different level of formality:
But as you can see — different questions, different respondents, different ways to ask the question.
Choose your communication platform wisely — the medium through which you ask the question is as important as the question itself. If you send a Slack message to someone you know is rarely there, then don’t wonder why it takes them weeks to reply. Know when to keep it casual and when to get all formal and serious.
Here are some rules of thumb that I try to follow (not saying you should too, but you can see how I approach these issues):
So imagine two situations where I don’t know something — either technical or from the management perspective.
Let’s start with the first, technical scenario:
For the second scenario — when I have questions about the project that are not related to code:
I get that it’s a bit rudimentary, but it works. All project-related questions must be asked in good faith to show a certain level of respect to your peers and to everyone’s overall progress. So now that we know what to ask and who to ask, it’s about time we start asking.
In the digital world, time is of the essence. Ditch the “hello” and other unnecessary preliminaries that serve as mere fillers. This is one of those things that annoys me a lot: when a person writes “Hello”, and that’s it, waiting for a response. I would prefer people get straight to the point in the initial message, so I can understand whether I’m the right person for it or should point them in a different direction. It not only saves time but also signals that you respect the other person’s time and are serious about finding a solution.
Avoid making unscheduled calls to someone, as it demands their immediate and undivided attention, potentially disrupting their current flow. Also, just because someone responds to a chat doesn’t necessarily mean they are available for a more in-depth voice or video conversation.
Instead of the abrupt approach or simply asking, “do you have time for a call?” — which is slightly better but still not ideal — consider framing your request more thoughtfully. For instance:
"Hey Name, hope you’re doing great, are you available for a quick X-minute discussion about XYZ in about Y minutes (alternatively at XX:00)?”
This approach is beneficial for several reasons:
If the recipient is unable to respond immediately, it serves as a reminder of the topic you intended to discuss when they do get back to you. By adopting this method, you foster a respectful and considerate communication environment.
Okay, let’s get down to brass tacks here. Constructing a question that hits the mark is an art in itself. First off, lay the groundwork by stating what you already know. Be clear, concise, and straight to the point. Short, bullet-point style things that you have tried and learned about the problem.
Now, let’s talk about the meat of your question. Avoid complex sentences that leave people scratching their heads. Complexity is your enemy here; clarity, your ally. You want every sentence to have maximum value with zero fluff. The respondent shouldn’t have to wade through layers of ambiguity to get to the core of what you’re asking.
Your ultimate goal?
There’s actually a great method for figuring out what you want to ask — the rubber duck method. It’s mostly used to solve hard problems, but it also helps you structure your thoughts in the form of a question, as the duck acts as a sparring partner.
Acquire a rubber duck, preferably of the kind you’d find in a bathtub.
Position the rubber duck on your desk and politely tell it that you plan to walk through some code with its assistance.
Begin by describing to the duck the expected results, followed by a detailed walkthrough of each line of code.
As you articulate your process aloud, you may suddenly notice a discrepancy between what you intended to do and what the code is actually doing. Despite its silent presence, the duck has facilitated this realization, aiding you in identifying the issue.
Remember, the quality of the answers you receive is directly proportional to the clarity and precision of your question.
Enjoyed the read? Subscribe to read more articles from me.
After you’ve got your answer, it’s not time to move on just yet. It’s time for some exercise in comprehension. Summarise what the person has told you and ask them if you understood it correctly.
Now, let’s talk about the ripple effect of knowledge — it’s time to pay it forward.
You can pay it forward in several different ways:
This act of sharing not only benefits others who might grapple with a similar issue in the future but also reinforces your understanding.
That’s basically it, hope it gave you some insights, if not, here are some more resources that might interest you:
Other Newsletter Issues:
The post Asking questions the right way appeared first on Vadim Kravcenko.
Adrian 11/09/2023 | Source: death and gravity
Are you having trouble figuring out when to use classes or how to organize them?
Have you repeatedly searched for "when to use classes in Python", read all the articles and watched all the talks, and still don't know whether you should be using classes in any given situation?
Have you read discussions about it that for all you know may be right, but they're so academic you can't parse the jargon?
Have you read articles that all treat the "obvious" cases, leaving you with no clear answer when you try to apply them to your own code?
My experience is that, unfortunately, the best way to learn this is to look at lots of examples.
Most guidelines tend to either be too vague if you don't already know enough about the subject, or too specific and saying things you already know.
This is one of those things that once you get it seems obvious and intuitive, but it's not, and is quite difficult to explain properly.
So, instead of prescribing a general approach, let's look at:
If you repeat similar sets of functions, consider grouping them in a class.
That's it.
In its most basic form, a class is when you group data with functions that operate on that data; sometimes, there is no data, but it can still be useful to group the functions into an abstract object that exists only to make things easier to use / understand.
Depending on whether you choose which class to use at runtime, this is sometimes called the strategy pattern.
Note
As Wikipedia puts it, "A heuristic is a practical way to solve a problem. It is better than chance, but does not always work. A person develops a heuristic by using intelligence, experience, and common sense."
So, this is not the correct thing to do all the time, or even most of the time.
Instead, I hope that this and other heuristics can help build the right intuition for people on their way from "I know the class syntax, now what?" to "proper" object-oriented design.
My feed reader library retrieves and stores web feeds (Atom, RSS and so on).
Usually, feeds come from the internet, but you can also use local files. The parsers for various formats don't really care where a feed is coming from, so they always take an open file as input.
reader supports conditional requests – that is, only retrieve a feed if it changed. To do this, it stores the ETag HTTP header from a response, and passes it back as the If-None-Match header of the next request; if nothing changed, the server can respond with 304 Not Modified instead of sending back the full content.
Let's have a look at how the code to retrieve feeds evolved over time; this version omits a few details, but it will end up with a structure similar to that of the full version. In the beginning, there was a function – URL and old ETag in, file and new ETag out:
def retrieve(url, etag=None):
    if any(url.startswith(p) for p in ('http://', 'https://')):
        headers = {}
        if etag:
            headers['If-None-Match'] = etag
        response = requests.get(url, headers=headers, stream=True)
        response.raise_for_status()
        if response.status_code == 304:
            response.close()
            return None, etag
        etag = response.headers.get('ETag', etag)
        response.raw.decode_content = True
        return response.raw, etag
    # fall back to file
    path = extract_path(url)
    return open(path, 'rb'), None
We use Requests to get HTTP URLs, and return the underlying file-like object.1
For local files, we support both bare paths and file URIs; for the latter, we do a bit of validation – file:feed and file://localhost/feed are OK, but file://invalid/feed and unknown:feed2 are not:
def extract_path(url):
    url_parsed = urllib.parse.urlparse(url)
    if url_parsed.scheme == 'file':
        if url_parsed.netloc not in ('', 'localhost'):
            raise ValueError("unknown authority for file URI")
        return urllib.request.url2pathname(url_parsed.path)
    if url_parsed.scheme:
        raise ValueError("unknown scheme for file URI")
    # no scheme, treat as a path
    return url
One of reader's goals is to be extensible. For example, it should be possible to add new feed sources like an FTP server (ftp://...) or Twitter without changing reader code; however, our current implementation makes it hard to do so.
We can fix this by extracting retrieval logic into separate functions, one per protocol:
def http_retriever(url, etag):
    headers = {}
    # ...
    return response.raw, etag

def file_retriever(url, etag):
    path = extract_path(url)
    return open(path, 'rb'), None
...and then routing to the right one depending on the URL prefix:
# sorted by key length (longest first)
RETRIEVERS = {
    'https://': http_retriever,
    'http://': http_retriever,
    # fall back to file
    '': file_retriever,
}

def get_retriever(url):
    for prefix, retriever in RETRIEVERS.items():
        if url.lower().startswith(prefix.lower()):
            return retriever
    raise ValueError("no retriever for URL")

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever(url, etag)
Now, plugins can register retrievers by adding them to RETRIEVERS
(in practice, there's a method for that,
so users don't need to care about it staying sorted).
To add a feed, you call add_feed() with the feed URL.
But what if you pass an invalid URL? The feed gets stored in the database, and you get an "unknown scheme for file URI" error on the next update. However, this can be confusing – a good API should signal errors near the action that triggered them. This means add_feed() needs to validate the URL without actually retrieving it.
For HTTP, Requests can do the validation for us;
for files, we can call extract_path()
and ignore the result.
Of course, we should select the appropriate logic in the same way we select retrievers,
otherwise we're back where we started.
Now, there's more than one way of doing this. We could keep a separate validator registry, but that may accidentally become out of sync with the retriever one.
URL_VALIDATORS = {
    'https://': http_url_validator,
    'http://': http_url_validator,
    '': file_url_validator,
}
Or, we could keep a (retriever, validator) pair in the retriever registry. This is better, but it's not all that readable (what if we need to add a third thing?); also, it makes customizing behavior that affects both the retriever and validator harder.
RETRIEVERS = {
    'https://': (http_retriever, http_url_validator),
    'http://': (http_retriever, http_url_validator),
    '': (file_retriever, file_url_validator),
}
Better yet, we can use a class to make the grouping explicit:
class HTTPRetriever:

    def retrieve(self, url, etag):
        headers = {}
        # ...
        return response.raw, etag

    def validate_url(self, url):
        session = requests.Session()
        session.get_adapter(url)
        session.prepare_request(requests.Request('GET', url))

class FileRetriever:

    def retrieve(self, url, etag):
        path = extract_path(url)
        return open(path, 'rb'), None

    def validate_url(self, url):
        extract_path(url)
We then instantiate them,
and update retrieve()
to call the methods:
http_retriever = HTTPRetriever()
file_retriever = FileRetriever()

def retrieve(url, etag=None):
    retriever = get_retriever(url)
    return retriever.retrieve(url, etag)
validate_url()
works just the same:
def validate_url(url):
    retriever = get_retriever(url)
    retriever.validate_url(url)
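To make that concrete, using the file URI examples from earlier:
>>> validate_url('file://localhost/feed')  # OK, returns None
>>> validate_url('unknown:feed')
Traceback (most recent call last):
...
ValueError: unknown scheme for file URI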
And there you have it – if you repeat similar sets of functions, consider grouping them in a class.
Say you want to update feeds in parallel, using multiple threads.
Retrieving feeds is mostly waiting around for I/O, so it will benefit the most from it. Parsing, on the other hand, is pure Python, CPU bound code, so threads won't help due to the global interpreter lock.
However, because we're streaming the response body,
I/O is not done when the retriever returns the file,
but when the parser finishes reading it.3
We can move all the (network) I/O into retrieve()
by reading the response into a temporary file
and returning it instead.
We'll allow any retriever to opt into this behavior by using a class attribute:
class HTTPRetriever:
    slow_to_read = True

class FileRetriever:
    slow_to_read = False
If a retriever is slow to read, retrieve()
does the swap:
def retrieve(url, etag=None):
    retriever = get_retriever(url)
    file, etag = retriever.retrieve(url, etag)
    if file and retriever.slow_to_read:
        temp = tempfile.TemporaryFile()
        shutil.copyfileobj(file, temp)
        file.close()
        temp.seek(0)
        file = temp
    return file, etag
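With that in place, all the network I/O happens inside retrieve(), so a thread pool can parallelize the slow part. A minimal sketch, assuming a parse() function and a feed_urls list (both hypothetical, not reader's actual API):
from concurrent.futures import ThreadPoolExecutor

def update_feed(url):
    # retrieval is network I/O, which releases the GIL, so threads overlap
    file, etag = retrieve(url)
    if file:
        parse(file)  # CPU bound, but now reads from a local temporary file

with ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(update_feed, feed_urls)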
The Flask web framework provides an extendable compact representation for non-standard JSON types called tagged JSON (code). The serializer class delegates most conversion work to methods of various JSONTag subclasses (one per supported type):
check() checks if a Python value should be tagged by that tag
tag() converts it to tagged JSON
to_python() converts a JSON value back to Python (the serializer uses the key tag attribute to find the correct tag)
Interestingly, tag instances have an attribute pointing back to the serializer, likely to allow recursion – when (un)packing a possibly nested collection, you need to recursively (un)pack its values. Passing the serializer to each method would have also worked, but when your functions take the same arguments...
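To illustrate the shape of this pattern, here's a hypothetical sketch (not Flask's actual code; the serializer's tag()/untag() methods are assumptions):
class TagSet:
    key = '!set'  # hypothetical marker used in the JSON representation

    def __init__(self, serializer):
        # back-reference so nested values can be (un)packed recursively
        self.serializer = serializer

    def check(self, value):
        # should this Python value be tagged by this tag?
        return isinstance(value, set)

    def tag(self, value):
        # convert it to tagged JSON
        return {self.key: [self.serializer.tag(v) for v in value]}

    def to_python(self, value):
        # convert tagged JSON back to Python
        return {self.serializer.untag(v) for v in value}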
OK, the retriever code works.
But, how should you communicate to others
(readers, implementers, interpreters, type checkers)
that an HTTPRetriever is the same kind of thing as a FileRetriever,
and as anything else that can go in RETRIEVERS?
Here's the definition of duck typing:
A programming style which does not look at an object's type to determine if it has the right interface; instead, the method or attribute is simply called or used ("If it looks like a duck and quacks like a duck, it must be a duck.") [...]
This is what we're doing now! If it retrieves like a retriever and validates URLs like a retriever, then it's a retriever.
You see this all the time in Python. For example, json.dump() takes a file-like object; now, the full text file interface has lots of methods and attributes, but dump() only cares about write(), and will accept any object implementing it:
>>> class MyFile:
...     def write(self, s):
...         print(f"writing: {s}")
...
>>> f = MyFile()
>>> json.dump({'one': 1}, f)
writing: {
writing: "one"
writing: :
writing: 1
writing: }
The main way to communicate this is through documentation:
Serialize obj [...] to fp (a
.write()
-supporting file-like object)
Nevertheless, you may want to be more explicit about the relationships between types. The easiest option is to use a base class, and require retrievers to inherit from it.
class Retriever:
    slow_to_read = False

    def retrieve(self, url, etag):
        raise NotImplementedError

    def validate_url(self, url):
        raise NotImplementedError
This allows you to check the type with isinstance(), provide default methods and attributes, and will help type checkers and autocompletion, at the expense of forcing a dependency on the base class.
>>> class MyRetriever(Retriever): pass
>>> retriever = MyRetriever()
>>> retriever.slow_to_read
False
>>> isinstance(retriever, Retriever)
True
What it won't do is check subclasses actually define the methods:
>>> retriever.validate_url('myurl')
Traceback (most recent call last):
...
NotImplementedError
This is where abstract base classes come in. The decorators in the abc module allow defining abstract methods that must be overriden:
from abc import ABC, abstractmethod, abstractproperty

class Retriever(ABC):
    @abstractproperty
    def slow_to_read(self):
        return False

    @abstractmethod
    def retrieve(self, url, etag):
        raise NotImplementedError

    @abstractmethod
    def validate_url(self, url):
        raise NotImplementedError
This is checked at runtime (but only that methods and attributes are present, not their signatures or types):
>>> class MyRetriever(Retriever): pass
>>> MyRetriever()
Traceback (most recent call last):
...
TypeError: Can't instantiate abstract class MyRetriever with abstract methods retrieve, slow_to_read, validate_url
>>> class MyRetriever(Retriever):
...     slow_to_read = False
...     def retrieve(self, url, etag): ...
...     def validate_url(self, url): ...
...
>>> MyRetriever()
<__main__.MyRetriever object at 0x1037aac50>
Tip
You can also use ABCs to register arbitrary types as "virtual subclasses"; this allows them to pass isinstance() checks without inheritance, but won't check for required methods:
>>> class MyRetriever: pass
>>> Retriever.register(MyRetriever)
<class '__main__.MyRetriever'>
>>> isinstance(MyRetriever(), Retriever)
True
Finally, we have protocols, aka structural subtyping, aka static duck typing. Introduced in PEP 544, they go in the opposite direction – what if, instead of declaring what the type of something is, we declare what methods it has to have to be of a specific type?
You define a protocol by inheriting typing.Protocol:
from typing import IO, Protocol

class Retriever(Protocol):
    @property
    def slow_to_read(self) -> bool:
        ...

    def retrieve(self, url: str, etag: str | None) -> tuple[IO[bytes] | None, str | None]:
        ...

    def validate_url(self, url: str) -> None:
        ...
...and then use it in type annotations:
def mount_retriever(prefix: str, retriever: Retriever) -> None:
    raise NotImplementedError
Some other code (not necessarily yours, not necessarily aware the protocol even exists) defines an implementation:
class MyRetriever:
    slow_to_read = False

    def validate_url(self):
        pass
...and then uses it with annotated code:
mount_retriever('my', MyRetriever())
A type checker like mypy will check if the provided instance conforms to the protocol – not only that methods exist, but that their signatures are correct too – all without the implementation having to declare anything.
$ mypy myproto.py
myproto.py:11: error: Argument 2 to "mount_retriever" has incompatible type "MyRetriever"; expected "Retriever" [arg-type]
myproto.py:11: note: "MyRetriever" is missing following "Retriever" protocol member:
myproto.py:11: note: retrieve
myproto.py:11: note: Following member(s) of "MyRetriever" have conflicts:
myproto.py:11: note: Expected:
myproto.py:11: note: def validate_url(self, url: str) -> None
myproto.py:11: note: Got:
myproto.py:11: note: def validate_url(self) -> Any
Found 1 error in 1 file (checked 1 source file)
Tip
If you decorate your protocol with runtime_checkable, you can use it in isinstance() checks, but like ABCs, it only checks methods are present.
If a class has no state and you don't need inheritance, you can use a module instead:
# module.py
slow_to_read = False

def retrieve(url, etag):
    raise NotImplementedError

def validate_url(url):
    raise NotImplementedError
From a duck typing perspective, this is a valid retriever, since it has all the expected methods and attributes. So much so, that it's also compatible with protocols:
import module
mount_retriever('mod', module)
$ mypy module.py
Success: no issues found in 1 source file
I tried to keep the retriever example stateless, but real world classes rarely are (it may be immutable state, but it's state nonetheless). Also, you're limited to exactly one implementation per module, which is usually too much like Java for my taste.
If you're doing something and you think you need a class, do it and see how it looks. If you think it's better, keep it, otherwise, revert the change. You can always switch in either direction later.
If you got it right the first time, great! If not, by having to fix it you'll learn something, and next time you'll know better.
Also, don't beat yourself up.
Sure, there are nice libraries out there that use classes in just the right way, after spending lots of time to find the right abstraction. But abstraction is difficult and time consuming, and in everyday code good enough is just that – good enough – you don't need to go to the extreme.
Learned something new today? Share this with others, it really helps!
This code has a potential bug: if we were using a persistent session instead of a transient one, the connection would never be released, since we're not closing the response after we're done with it. In the actual code, we're doing both, but the only way to do so reliably is to return a context manager; I omitted this because it doesn't add anything to our discussion about classes. [return]
We're handling unknown URI schemes here because bare paths don't have a scheme, so anything that didn't match a known scheme must be a bare path. Also, on Windows (not supported yet), the drive letter in a path like c:\feed.xml is indistinguishable from a scheme. [return]
Unless the response is small enough to fit in the TCP receive buffer. [return]
Eli Bendersky 09/09/2023 | Source: Eli Bendersky's website
From its inception, the Web has been a game of whackamole between people finding security holes and exploits, and other people plugging these holes and adding defensive security mechanisms.
One of the busiest arenas in this struggle is the interaction between code running on one site (via JavaScript embedded in its page) and other sites; you may have heard about acronyms like XSS, CSRF, SSRF, SOP and CORS - they are all related to this dynamic and fascinating aspect of modern computer security. This post talks specifically about CORS, and what you should know if you're writing servers in Go.
Our story starts with the Same-origin policy (SOP) - a mechanism built into browsers that prevents arbitrary access from the site you're currently browsing to other sites. Suppose you're browsing https://catvideos.meow; while you're doing so, your browser will execute JS code from that site's pages.
JS can - among other things - fetch resources from other domains; this is commonly used for images, stats, ads, for loading other JS modules from CDNs and so on.
But it's also an inherently unsafe operation, because what if someone injects malicious code into catvideos.meow that sends requests to https://yourbank.com! Since the JS of catvideos.meow is executed by your browser, this is akin to you opening a new browser window and visiting https://yourbank.com, including providing any log-in information and cookies that may already be saved in your browser's session. That doesn't sound very safe!
This is what the SOP was designed to prevent; generally speaking, except for a limited set of "safe" (but mostly there for historical reasons) use cases like fetching images, embedding and submitting a limited set of forms, JS is not allowed to make cross-origin requests.
A request is considered cross-origin if it's made from origin A to origin B, and any of the following differ between the origins: protocol, domain and port (a default port is assumed per protocol, if not explicitly provided):
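For example, taking https://catvideos.meow as the first origin (illustrative comparisons):
https://catvideos.meow/about - same protocol, domain and port: same-origin
http://catvideos.meow - different protocol: cross-origin
https://api.catvideos.meow - different domain: cross-origin
https://catvideos.meow:8443 - different port: cross-origin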
If the protocol, domain and port match, the request is valid - the path doesn't matter. Naturally, this is used all the time by JS loading other resources from its own domain.
Let's try a simple experiment to see how this browser protection works; this only requires a couple of small HTML files with a bit of JS. Place two HTML files in the same directory; one should be named page.html and its contents don't matter. The other should be named do-fetch.html, with these contents:
<html>
<head>
<title>Fetch another page</title>
</head>
<body>
<script>
var url = 'http://127.0.0.1:8080/page.html'
fetch(url)
.then(response => {
console.log(response.status);
})
.catch(error => {
console.log("ERROR:", error);
});
</script>
</body>
</html>
It attempts to load page.html from a URL (which points to a local machine's port) via the fetch() API.
First experiment: run a local static file server in the directory containing these two HTML files. Feel free to use my static-server project, but any server will do [1]:
$ go install github.com/eliben/static-server@latest
$ ls
do-fetch.html page.html
$ static-server -port 8080 .
2023/09/03 06:02:10.111818 Serving directory "." on http://127.0.0.1:8080
This serves our two HTML files on local port 8080. Now we can point our browser to http://127.0.0.1:8080/do-fetch.html and open the browser console. There shouldn't be errors, and we should see the printout 200, which is the successful HTTP response from attempting to load page.html. It succeeds because this is a same-origin fetch, from http://127.0.0.1:8080 to itself.
Second experiment: while the static server on port 8080 is still running, run another instance of the server, serving the same directory on a different port - you'll want to do this in a separate terminal:
$ ls
do-fetch.html page.html
$ static-server -port 9999 .
2023/09/03 06:12:19.742790 Serving directory "." on http://127.0.0.1:9999
Now, let's point the browser to http://127.0.0.1:9999/do-fetch.html and open the browser console again. The page won't load, and instead you'll see an error similar to:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
remote resource at http://127.0.0.1:8080/page.html. (Reason: CORS header
‘Access-Control-Allow-Origin’ missing).
This is the SOP in action. Here's what's going on: the page was served from the origin http://127.0.0.1:9999, but its JS tried to fetch a resource from http://127.0.0.1:8080; the ports differ, so this is a cross-origin request, and the browser blocks it because the server didn't explicitly allow it.
Note that the browser also mentions a CORS header, which is a great segue to our next topic.
So what is CORS, and how can it help us make requests to different origins? The CORS acronym stands for Cross-Origin Resource Sharing, and this is a good definition from MDN:
Cross-Origin Resource Sharing (CORS) is an HTTP-header based mechanism that allows a server to indicate any origins (domain, scheme, or port) other than its own from which a browser should permit loading resources.
CORS is a simple protocol between an HTTP server and a browser. When a page attempts to make a cross-origin request, the browser attaches a special header to the request with the name Origin; in this header, the browser specifies the origin from which the request originates.
We can actually observe this if we look at the debug console of the browser in more detail in our SOP experiment. In the Network tab, we can examine the exact HTTP request made by the browser to fetch the page from http://127.0.0.1:8080/page.html when do-fetch.html asked for it. We should see something like:
GET /page.html HTTP/1.1
Host: 127.0.0.1:8080
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://127.0.0.1:9999/
Origin: http://127.0.0.1:9999
The important line here is the last one: it tells the server which origin the request is coming from.
We can also examine the server's response, in which we'll see that the server does not include a special header named Access-Control-Allow-Origin. Since this header is not in the response, the browser assumes that the server doesn't support CORS from the specified origin, and this results in the error we've seen above.
To complete a successful cross-origin request, the server has to approve the request explicitly by returning an Access-Control-Allow-Origin header. The value of the header should be either the origin named in the request's Origin header, or the special value * which means "all origins accepted".
To see this in action, it's time for another experiment; let's write a simple Go server that supports cross-origin requests.
Leaving static file serving behind, let's move closer towards what CORS is actually used for: protecting access to APIs from unknown origins. Here's a simple Go server that serves a very basic API endpoint at /api, returning a hard-coded JSON value:
func apiHandler(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    fmt.Fprintln(w, `{"message": "hello"}`)
}

func main() {
    port := ":8080"
    mux := http.NewServeMux()
    mux.HandleFunc("/api", apiHandler)
    http.ListenAndServe(port, mux)
}
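Note, by the way, that a plain non-browser client is unaffected by any of this – the SOP and CORS are enforced by browsers, not by servers. With the server running locally:
$ curl http://localhost:8080/api
{"message": "hello"}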
This server should be started locally; here's a somewhat modified HTML file with JS making a CORS request to this endpoint, assuming the server runs on local port 8080:
<html>
<head>
<title>Access API through CORS</title>
</head>
<body>
<script>
var url = 'http://localhost:8080/api'
fetch(url)
.then(response => {
if (response.ok) {
return response.json();
} else {
throw new Error('Failed to fetch data');
}
})
.then(data => {
document.writeln(data.message);
})
.catch(error => {
document.writeln("ERROR: ", error);
});
</script>
</body>
</html>
Assuming this code is saved locally in access-through-cors.html, we will serve it with static-server on port 9999, as before:
$ static-server -port 9999 .
2023/09/03 08:01:22.413757 Serving directory "." on http://127.0.0.1:9999
When we open http://127.0.0.1:9999/access-through-cors.html in the browser, we'll see the CORS error again:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
remote resource at http://127.0.0.1:8080/api. (Reason: CORS header
‘Access-Control-Allow-Origin’ missing).
Indeed, our server doesn't support CORS yet! This is an important point to emphasize - a server oblivious to CORS means it doesn't support it. In other words, CORS is "opt-in". Since our server doesn't check for the Origin header and doesn't return the expected CORS headers back to the client, the browser assumes that the cross-origin request is denied, and returns an error to the HTML page [2].
Let's fix that, and implement CORS in our server. It's customary to do it as middleware that wraps the HTTP handler. Here's a simple approach:
var originAllowlist = []string{
    "http://127.0.0.1:9999",
    "http://cats.com",
    "http://safe.frontend.net",
}

func checkCORS(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        origin := r.Header.Get("Origin")
        if slices.Contains(originAllowlist, origin) {
            w.Header().Set("Access-Control-Allow-Origin", origin)
            w.Header().Add("Vary", "Origin")
        }
        next.ServeHTTP(w, r)
    })
}
checkCORS is standard Go middleware. It wraps any HTTP handler and adds CORS logic on top; here's how it works: it reads the Origin header from the request, and if that origin is in the allow-list, it echoes it back in the Access-Control-Allow-Origin response header. It also adds Vary: Origin, telling caches that the response depends on the request's Origin header. Either way, it then invokes the wrapped handler.
Obviously, the allow-list solution presented here is ad-hoc, and you are free to implement your own. Some API endpoints want to be truly public and support cross-origin requests from any domain. In such cases, one can just hard-code Access-Control-Allow-Origin: * in all responses, without additional logic. In this case the Vary header isn't required either.
Now that we have the middleware in place, we have to hook it into our server; let's wrap the top-level router, so checkCORS applies to all endpoints we may add to the server in the future:
func main() {
    port := ":8080"
    mux := http.NewServeMux()
    mux.HandleFunc("/api", apiHandler)
    http.ListenAndServe(port, checkCORS(mux))
}
If we kill the old server occupying port 8080 and run this one instead, re-loading access-through-cors.html we'll see different results: the page shows "hello" and there are no errors in the console. The CORS request succeeded! Let's examine the response headers:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://127.0.0.1:9999
Content-Type: application/json
Vary: Origin
Date: Sun, 03 Sep 2023 16:33:00 GMT
Content-Length: 21
The custom headers set by our middleware are highlighted; the request was made by a page served on local port 9999, and this is in the Origin header sent by the browser. Therefore, our response headers permit the browser to communicate the data back to the client code and finish without errors. As an exercise, modify the code of our CORS middleware to set * instead of a specific origin, then re-run the server and client page, and examine the response header again.
As we've seen, when a page issues a cross-origin request, the browser obliges, but withholds any response details from the fetching code unless the server explicitly agreed to receive the request via CORS. This can be worrisome, though; what if the request itself causes something unsafe to happen on the server?
This is what preflight requests are for; for some HTTP requests that aren't deemed inherently safe, a browser will first send a special OPTIONS request (called "preflight") to double check that the server is ready for this kind of request from the specific origin. Only if answered in the affirmative, the browser will then send the actual HTTP request.
The terminology here gets a bit confusing. The old CORS standard defines simple requests as those that don't require preflight, but the new fetch standard that defines CORS doesn't use this term. Generally, GET, HEAD and POST requests restricted to certain headers and content types are considered simple; for the full definition, see the linked standards. Anything that isn't simple requires a preflight [3].
The protocol goes as follows: the browser sends an OPTIONS request carrying the Origin header and an Access-Control-Request-Method header that names the method the actual request will use; if the server approves, it responds with matching Access-Control-Allow-Origin and Access-Control-Allow-Methods headers; only then does the browser send the actual request.
There's another feature of preflight requests which I'm not going to cover in detail here, but it's easy enough to implement if needed: permissions for special headers. Preflight requests not only protect servers from potentially unsafe methods, but also from potentially unsafe headers. If the client tries to send a cross-origin request with such headers, the browser will send a preflight with the Access-Control-Request-Headers header listing these headers; the server has to reply with Access-Control-Allow-Headers in order for the protocol to succeed.
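If you ever do need it, it slots naturally into the preflight branch of the middleware we're about to write; a minimal sketch (the allow-list contents here are assumptions for illustration):
var headerAllowlist = []string{"Content-Type", "X-Requested-With"}

// inside the preflight branch: if the browser asks about custom headers,
// reply with the ones this server permits
if reqHeaders := r.Header.Get("Access-Control-Request-Headers"); reqHeaders != "" {
    w.Header().Set("Access-Control-Allow-Headers", strings.Join(headerAllowlist, ", "))
}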
Before working on the server's code, let's see how the browser sends preflight requests on behalf of a fetch call. We'll update the JS code in our HTML page just a bit:
var url = 'http://localhost:8080/api'
fetch(url, {method: 'DELETE'})
  .then(response => {
    if (response.ok) {
      return response.json();
    } else {
      throw new Error('Failed to fetch data');
    }
  })
  .then(data => {
    document.writeln(data.message);
  })
  .catch(error => {
    document.writeln("ERROR: ", error);
  });
With the old CORS server (that doesn't support preflight requests yet) still running on port 8080, when we open this page in the browser served at 127.0.0.1:9999, we'll see an error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
remote resource at http://localhost:8080/api. (Reason: Did not find method in
CORS header ‘Access-Control-Allow-Methods’).
Diving deeper, we find that the browser sent an OPTIONS request to the server with the following relevant headers:
Access-Control-Request-Method: DELETE
Origin: http://127.0.0.1:9999
This means "hey server, some code at origin 127.0.0.1:9999 wants to send you a DELETE request, are you cool with that?"
Did our server reply? Yes, with the same response it sent for the GET request in the previous example:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://127.0.0.1:9999
Content-Type: application/json
Vary: Origin
Date: Sun, 03 Sep 2023 21:34:03 GMT
Content-Length: 21
That's because we haven't actually restricted methods in our Go server: it returns the same response for every method - in this case OPTIONS! Since the browser sent our server a preflight for DELETE, it expected the server to reply with an Access-Control-Allow-Methods header that lists DELETE. The server didn't, so the browser aborted the procedure and returned an error to the client (without actually sending the DELETE request itself).
Let's now fix that, by implementing preflight in our server. We'll start with a helper function that reports whether the given request is a preflight request:
func isPreflight(r *http.Request) bool {
	return r.Method == "OPTIONS" &&
		r.Header.Get("Origin") != "" &&
		r.Header.Get("Access-Control-Request-Method") != ""
}
It's important to note that all three conditions have to be true for the request to be considered preflight. Next, we'll modify our checkCORS middleware to support preflights:
var originAllowlist = []string{
	"http://127.0.0.1:9999",
	"http://cats.com",
	"http://safe.frontend.net",
}

var methodAllowlist = []string{"GET", "POST", "DELETE", "OPTIONS"}

func checkCORS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if isPreflight(r) {
			// Preflight: approve only if both the origin and the
			// requested method are on our allowlists.
			origin := r.Header.Get("Origin")
			method := r.Header.Get("Access-Control-Request-Method")
			if slices.Contains(originAllowlist, origin) && slices.Contains(methodAllowlist, method) {
				w.Header().Set("Access-Control-Allow-Origin", origin)
				w.Header().Set("Access-Control-Allow-Methods", strings.Join(methodAllowlist, ", "))
				w.Header().Add("Vary", "Origin")
			}
		} else {
			// Not a preflight: regular request.
			origin := r.Header.Get("Origin")
			if slices.Contains(originAllowlist, origin) {
				w.Header().Set("Access-Control-Allow-Origin", origin)
				w.Header().Add("Vary", "Origin")
			}
		}
		next.ServeHTTP(w, r)
	})
}
If we run this updated server on port 8080 and invoke the HTML page that does a fetch with method: 'DELETE' again, the request will be successful. The server now has a tailored reply for the OPTIONS preflight request:
HTTP/1.1 200 OK
Access-Control-Allow-Methods: GET, POST, DELETE, OPTIONS
Access-Control-Allow-Origin: http://127.0.0.1:9999
Content-Type: application/json
Vary: Origin
Date: Sun, 03 Sep 2023 13:12:29 GMT
Content-Length: 21
The browser then proceeds to send the DELETE request itself:
DELETE /api HTTP/1.1
Host: localhost:8080
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/117.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Referer: http://127.0.0.1:9999/
Origin: http://127.0.0.1:9999
Which gets a successful reply:
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://127.0.0.1:9999
Content-Type: application/json
Vary: Origin
Date: Sun, 03 Sep 2023 13:12:29 GMT
Content-Length: 21
At the beginning of the post we discussed how sending cookies on behalf of the visiting browser is one of the main security issues the SOP and CORS try to address. Now it's time to discuss this in more detail.
Let's go back to our server and have it set a cookie when a certain path is accessed. Our main function becomes:
func main() {
	port := ":8080"
	mux := http.NewServeMux()
	mux.HandleFunc("/api", apiHandler)
	mux.HandleFunc("/getcookie", getCookieHandler)
	http.ListenAndServe(port, checkCORS(mux))
}
And getCookieHandler is:
func getCookieHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Set-Cookie", "somekey=somevalue")
	fmt.Fprintln(w, `{"message": "you're welcome"}`)
}
Very simple: everyone visiting the /getcookie route gets a cookie! If we run this server on port 8080 as usual and visit http://127.0.0.1:8080/getcookie, we should see the cookie sent in the response header:
HTTP/1.1 200 OK
Set-Cookie: somekey=somevalue
Date: Sun, 03 Sep 2023 13:25:09 GMT
Content-Length: 30
Content-Type: text/plain; charset=utf-8
Note: this isn't a CORS request; this is the browser accessing the server directly. Opening the developer console ("Storage" tab), we should be able to see that this cookie is now associated with 127.0.0.1:8080.
If we refresh the page, we'll notice that the browser now sends a Cookie header with this cookie in requests to 127.0.0.1:8080 - as expected!
Next, let's try to access /api again from our HTML page served on a different origin (port 9999):
<html>
  <head>
    <title>CORS with credentials</title>
  </head>
  <body>
    <script>
      var url = 'http://localhost:8080/api'
      fetch(url, {credentials: "include"})
        .then(response => {
          if (response.ok) {
            return response.json();
          } else {
            throw new Error('Failed to fetch data');
          }
        })
        .then(data => {
          document.writeln(data.message);
        })
        .catch(error => {
          document.writeln("ERROR: ", error);
        });
    </script>
  </body>
</html>
This is where things get interesting; our browser has a cookie associated with 127.0.0.1:8080, and now a different origin makes a request to this domain inside our browser.
fetch won't send cookies by default; it needs to be told to do so explicitly (this is yet another security mechanism). The fetch call above shows how: the credentials option is set to "include". When we serve this page on http://127.0.0.1:9999/getcookie.html, we'll see that the cookie is sent in the request with this header:
Cookie: somekey=somevalue
But there's a CORS error in the console, and the browser returns an error to fetch:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the
remote resource at http://localhost:8080/api. (Reason: expected ‘true’ in CORS
header ‘Access-Control-Allow-Credentials’).
This is because our server doesn't support credentials for CORS yet! As the error suggests, to signal that credentials are supported, the server has to set a special header named Access-Control-Allow-Credentials to true:
w.Header().Set("Access-Control-Allow-Credentials", "true")
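One way to wire this into our middleware, as a sketch (the post's actual code may structure it differently), is to set the header wherever we approve an origin:
origin := r.Header.Get("Origin")
if slices.Contains(originAllowlist, origin) {
	w.Header().Set("Access-Control-Allow-Origin", origin)
	// Let the browser expose the response to credentialed requests.
	w.Header().Set("Access-Control-Allow-Credentials", "true")
	w.Header().Add("Vary", "Origin")
}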
If we rerun the server with this header set, the CORS request with the cookie succeeds.
Once again, note that for "simple" requests the browser does send the request with the cookie to the server; it just refuses to hand the reply back to the fetch unless the server explicitly accepts credentials over CORS by returning a special header. It's the server's job to ensure that nothing unsafe happens as a result of an unauthorized cross-origin request. For non-simple methods, the browser will expect Access-Control-Allow-Credentials to be set on the response to a preflight request, and the actual request won't carry the cookie unless this condition is satisfied.
This post is an introduction to CORS for Go programmers. It doesn't cover all the aspects and details of CORS, but should be a good foundation for finding out more, if desired. For additional resources:
Finally, it's unlikely that you'll have to roll your own CORS implementation. Popular Go web frameworks like Gin and Echo have CORS middleware built-in, and projects like rs/cors provide a framework-agnostic solution.
All the Go and HTML code for this post's samples and experiments is available on GitHub.
[1] | When doing experiments involving fetch and other non-trivial JS, it's strongly recommended to actually serve the HTML files locally, rather than just opening them with file:/// in the browser. Specifically for CORS, file:/// has some additional nuances (e.g. the Origin header is set to null). Note also that having different ports on 127.0.0.1 is sufficient to demonstrate the topics of this post, because ports count in the definition of "origin". An alternative is to use the system's /etc/hosts configuration file to define domain aliases for 127.0.0.1, and run our static server with sudo to enable serving on port 80. This provides a slightly more realistic emulation, like accessing http://foo.domain from http://bar.domain, since the browser is oblivious to domain aliases (it will even consider localhost and 127.0.0.1 distinct for the purposes of CORS). You're free to do so as an exercise, but having different ports to represent different origins is generally sufficient for our needs. |
[2] | Note that our Go server still returns a valid JSON response on the /api endpoint, and the browser gets this response back. However, the browser won't share it with the client fetch() call, reporting an error instead. In fact, if we just curl to http://127.0.0.1:8080/api while the server is running, we'll get the data back. The CORS mechanism is a browser feature, not part of the actual HTTP protocol. This highlights a very important point: while CORS is part of a security solution, it's absolutely unsuitable as the main (or only) security mechanism. If you expose an API endpoint on the public internet, clients will be able to access it. Browsers will block cross-origin requests from client-side JavaScript, but that's about it. If you're not actually interested in your endpoint being public, you should use a real authentication solution. And if your server will dutifully execute a DELETE request from any client on the internet and destroy critical records - you're going to have a bad time. Don't forget that HTTP is stateless, and the client is not required to send you a preflight request before a DELETE; as a matter of fact, all these requests can be easily spoofed using non-browser clients. |
[3] | You may wonder why POST is considered safe; unfortunately, the reason isn't a good technical one but rather backward compatibility. Forms have always submitted via POST, and since this worked historically, CORS couldn't interfere with it. In all fairness, it's a best practice to use CSRF protection in forms anyway, so there's already a security mechanism applied. |
Augusto Campos 08/09/2023 | Source: TRILUX
💰 Starting to track your household finances is much harder when you only notice the need at a moment when money is already short - but it's worth the effort, and I learned in practice a few ways to make it work.
🕰 The biggest cause of failure is thinking a household budget is about knowing where the money went. It's the opposite: it's choosing, in advance, where the money will go. The goal, generally, is to decide how to spend well (and manage to follow through), generating a positive balance if possible.
💸 That doesn't mean you don't need to look at where the money goes. You do, and in the broadest and most realistic way possible. If you're spending differently from what you planned, that has to show up in the records, so you know you need to adjust, and where.
💶 Another common mistake is confusing the financial view with the economic one. Financial control is about looking at today - $$ coming in and $$ going out. When the situation allows, there will be a surplus and an opportunity for the economic view (investing, or paying off debts in the best order).
The smaller the gap between your income and your unavoidable expenses, the stricter, more inflexible, and more tedious 🙅️ your plan will be. Don't be discouraged: at the start, focus on making a plan you won't want to cheat on (emergencies are not cheating!)
🏦 A first step can be to look at your statements from the last 2 or 3 months, to get a view of what repeats and what varies. From there, define how you will measure things going forward - realistically. You may need to start taking notes, or pick one of the many good apps.
📈 After that initial view, make your first weekly or monthly spending plan (matching how often you receive your income), and track it. The first plan will probably be broken. Keep adjusting: it's a maturing process, and it involves motivation.
🍔 There's room for optimism, but not for fantasy. If you're going to keep some expense you know you could avoid, include it in the plan; don't turn it into a reason to cheat. Even so, the tracking helps you start spending better.
🍀 The best moment to start was long ago, and the second best is today. In general, those who measure, control, and make the necessary adjustments over time get better results, even when starting late. Don't be discouraged, and good luck! 9/9
The article "Controle financeiro pessoal e doméstico: como colocar em prática" was originally published on TRILUX, by Augusto Campos.
Anonymous 06/09/2023 | Source: KIROKAZE
Free pixel portraits for patrons made as a challenge for August.
Join my Patreon for more free stuff.
Vadim Kravcenko 05/09/2023 | Source: Vadim Kravcenko
In a quaint bar on the outskirts of Catania (Italy), as whiskey glasses clinked and muted conversations blended into a harmonic background hum, an old-timer once told me, “The best drink isn’t the newest bottle on the shelf; it’s the one that’s aged just right.” Now, while he was probably quite drunk and didn’t speak a word in English, as well as this being a fictional story, I couldn’t help but draw a parallel to our world of incessant coding and technological innovations. Our constant need to rewrite.
There’s this great concept in budgeting — Aging Money — a practice that helps you build a solid financial foundation and free yourself from living paycheck-to-paycheck:
A dollar is born the day it arrives in your life. Let’s say you’re on your way to work Friday morning. You can’t afford to put gas in the car, but you get paid later today, so you’ll do it on the way home. You get paid, you cash the check, and then fill the tank. When you buy that gas, you’re spending money that’s barely 15 minutes old. It barely arrived in your world, and it’s headed right back out the door. This immediately creates uncertainty. You want to get to the point where money hangs around for a while before heading back out the door.
YNAB
Let’s break it down a bit. To simplify — the idea is that you spend the oldest money in your account first. This system, counter-intuitive as it may sound in our instant-gratification culture, is a cornerstone of sound financial stability. It gives you a buffer for emergencies, it smooths out your cash flow, and it provides you stability in the form of a nest egg for investments. In this case Old Money > New Money.
I’d like to take this concept one step further — old is better.
Of course, the allure of the new is intoxicating. It beckons with the promise of exciting possibilities and the thrill of being on the cutting edge. Who doesn’t want to implement the newest framework, consume the latest white noise on Twitter (Or X, or Whatever), or try out that new GPT LLM? But, as with any intoxication, there’s a hangover waiting on the other side. In our chase for the brand new, we often overlook the value of what’s stood the test of time.
It’s similar with news. God knows every day there’s a fresh hell or wonder being reported. But here’s a secret: the really important stuff? That has a longer shelf-life. There’s so much happening every day, that the things that matter to you — those things that you will remember in 5 years — will still be relevant tomorrow, or next week. You have enough time to read it, digest it, and ponder on it. It won’t spoil like milk; in fact, it’ll age like wine, providing new nuances and complexities as time progresses and more context comes to light.
The same applies to code. New libraries. New languages. New Frameworks. New Intern coming in and thinking he can rewrite better parts of the code himself. It’s easy to get swept away. But is the newest framework always the best choice? Is a rewrite really going to make everything better? Or is there wisdom in the code that has been around for years, has been tested with crazy edge cases, and has evolved together with the business?
As we dive into the world of IT, where systems seem to age faster than the Sicilian wine in my room, we’ll explore the beauty and benefit of mature codebases, and why sometimes it’s best to let codebases stay as they are and just.. slow down.
Why should you consider aging your code? Because the longer your code has been around, survived different cataclysms (read: business pivots), and evolved, the more robust it is. The team that built it before you had time to debug, to optimize, to improve - the code has accumulated years' worth of bugfixes in places you can't even imagine.
The kinks have been worked out, and what you’re left with is a mature, stable system that can handle whatever comes its way. (Or maybe a big pile of technical debt, which we’ll talk about later).
You see, in this fast-paced world of constant refactoring, there's something to be said for stability. Aging your code isn't about resisting progress; it's about ensuring that when progress happens, it's built on a rock-solid foundation.
You remember that eager intern who joined your team and wanted to change the world on day one? So full of ideas and yet so naive about the complexity of the existing systems. He’s the embodiment of the new framework that caught your eye, promising a whole new world. So you rewrite pieces of code, integrate it, and just like the intern who quickly finds out that corporate business rules are not as simple as they seem, the new code soon learns that fitting in isn’t that easy.
Every time you add a new library, it’s like adding an extra room to a house. But what happens when the room you add doesn’t quite match the existing architectural plan? What if the new room demands more power than your electrical system can handle or affects the foundational structure of the house itself?
Joel Spolsky writes in his essay “Things You Should Never Do”:
There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
It’s harder to read code than to write it.
This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.
That’s what happens to your IT system when you toss in a new library or start rewriting without a second thought. You’re not just pulling another block from your Jenga tower; you’re deviating from the original design and discarding the years of historical knowledge that grew around it.
As you stray further from the original design by mixing different frameworks and committing random acts of integration, something insidious happens: you begin to lose the conceptual integrity of your whole system. Remember that moment when you had to write a 10-page manual just to explain how to run a straightforward function? That’s a symptom. Your codebase has become the equivalent of an over-decorated Christmas tree, so loaded with ornaments that you can’t even see the branches.
Loss of system integrity is not just a fancy phrase; it’s the fast track to Complexity Hell. And Complexity Hell is not a nightclub; it’s a Dantean inferno where you spend days debugging code written in several different ways, weeks reading Github Issues of different frameworks, and months trying to integrate new features in all the right places that should’ve taken hours.
Every new rewrite of some piece of stable code not only adds to complexity but also brings its own maintenance costs. Sure, that library looked great in the demo, but now you have to keep it updated, make sure it’s compatible with the rest of your system, and oh god, did it just break the build? What was supposed to be a quick fix becomes a long-term liability.
And let’s not forget about adaptability. Remember how easy it was to add new features when your codebase was just tens of thousands of lines? Those were the days, right? The more you stray from your original architecture towards a patchwork of different “shiny new things” (that over time became dull), the harder it becomes to adapt. You find yourself navigating a maze of dependencies, conditional statements, and weird bugs that have no business being there.
If adaptability is the currency of the modern tech world, then a complex, bloated codebase is like having your assets frozen. You can’t move; you’re stuck. And while you’re standing there, frozen in the headlights of complexity, the world moves on and opportunities pass you by.
So before you get seduced by that desire to rewrite everything with a new framework that promises you “Increased X” where X is usually one of “Innovation, Performance or Flexibility”, take a step back. Have a hard look at your reliable codebase. It may not have the glitter of newness, but it has the glow of maturity. Remember, you’re not just writing code; you’re building a legacy for future developers. And legacies aren’t built on fads; they’re built on foundations. Foundations that can withstand the test of time, the whims of the market, and yes, even the allure of the new.
Take a really hard look at the situation:
Just so we’re on the same page — aging code isn't about resisting change or sticking to your guns while the world moves on. It's about recognizing that new isn't always better and old isn't always obsolete. It's about understanding that foundational strength isn't the antithesis of innovation, but its prerequisite.
In this regard, it’s also crucial to acknowledge that there are scenarios where opting for new technology or a major rewrite is not only warranted but essential.
Here are a few scenarios to consider:
Technological Advancements: One of the driving forces in the tech industry is innovation. New frameworks, languages, and tools are developed to leverage new hardware capabilities. Ignoring these innovations completely can lead to missed opportunities. It’s essential to evaluate whether a new technology aligns with your goals and offers significant advantages over the existing stack.
Technical Debt Overload: While some technical debt is manageable and can be strategically addressed, there comes a point where an accumulation of technical debt becomes overwhelming. If your codebase is riddled with complex workarounds and patch upon patch, even if it works — it might be time for a refactor to regain maintainability for future business needs.
Changing Business Requirements: The business landscape is dynamic, and sometimes, old code may no longer align with evolving market needs. If your current technology stack restricts your ability to respond quickly to customer demands, it may be worth considering rewriting in some framework that allows you to do that.
But coming back to “your mileage may vary”.
Each situation is unique, and the decision to refactor, rewrite, or adopt new technology should be based on a careful assessment of your specific circumstances, business objectives, budgets, team expertise, and technical considerations. The key is to find the sweet spot where mature systems and innovative technology can coexist harmoniously.
The post Aging Code appeared first on Vadim Kravcenko.
Yegor Bugayenko 05/09/2023 | Source: Yegor Bugayenko
Almost every document you may write in LaTeX format will have a list of references at the end. Most likely, you will use BibTeX or BibLaTeX to print this list of references in a nicely formatted way. It is also highly probable that your .bib file will contain many typographic, stylistic, and logical mistakes. I’m fairly certain that you won’t find the time to identify and correct them. As a result, the “References” section in your paper may appear sloppy. I suggest using the bibcop package, which identifies mistakes in the .bib file and auto-fixes some of them.
Here is a practical example. Let’s say you want to cite a famous paper about transformers. First, you find it in Google Scholar and click “Cite”:
Then, you put this “bib” item into your main.bib file:
@article{vaswani2017attention,
title={Attention is all you need},
author={Vaswani, Ashish and Shazeer, Noam and
Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and
Gomez, Aidan N and Kaiser, {\L}ukasz and Polosukhin, Illia},
journal={Advances in neural information processing systems},
volume={30},
year={2017}
}
Then, you write something like this in your paper:
\documentclass{article}
\usepackage[maxbibnames=9]{biblatex}
\addbibresource{main.bib}
\begin{document}
Transformers~\cite{vaswani2017attention}
changed everything!
\printbibliography
\end{document}
This is what you will get:
Looks more or less fine. However, if you go to the website of the publisher of this article, you will see that the official citation data doesn’t quite match.
In other words, Google Scholar gave you the citation with a few typographic mistakes. While not fatal, the quality of the “References” section can sometimes be seen as reflective of the quality of the paper as a whole. Simply put, negligence is not forgivable when dealing with information about other authors. We must be accurate down to every letter and every dot.
By adding the bibcop package to the document, the problem can be solved.
First, you install it (I assume you are using TeX Live):
$ sudo tlmgr install bibcop
Then, you add this to your document, right before the \addbibresource command:
...
\usepackage{bibcop}
\addbibresource{main.bib}
...
When you compile the document, the following warnings will be printed to the console, together with other logs:
Package bibcop Warning: A shortened name must have
a tailing dot in the 6th 'author', as in 'Knuth, Donald E.',
in the 'vaswani2017attention' entry.
Package bibcop Warning: All major words in the 'title'
must be capitalized, while the 2nd word 'is' is not,
in the 'vaswani2017attention' entry.
Package bibcop Warning: A mandatory 'doi' tag for '@article'
is missing among (author, journal, title, volume, year),
in the 'vaswani2017attention' entry.
Package bibcop Warning: The 'title' must be wrapped
in double curled brackets,
in the 'vaswani2017attention' entry.
You fix them all in the main.bib file and recompile the document:
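For instance, addressing the four warnings above, the fixed entry could look something like this (the doi value below is a placeholder, not the paper's real DOI; take the real one from the publisher's page):
@article{vaswani2017attention,
  title={{Attention Is All You Need}},
  author={Vaswani, Ashish and Shazeer, Noam and
    Parmar, Niki and Uszkoreit, Jakob and Jones, Llion and
    Gomez, Aidan N. and Kaiser, {\L}ukasz and Polosukhin, Illia},
  journal={Advances in Neural Information Processing Systems},
  volume={30},
  doi={10.0000/placeholder},
  year={2017}
}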
This one looks much better to me (especially with the DOI, which was not provided by Google Scholar).
By the way, some formatting problems may be auto-fixed by bibcop.
You can use it from the command line, assuming you have your main.bib file in the current directory:
$ bibcop --fix --in-place main.bib
This command will make as many fixes as possible.
Then, you can run bibcop again from the command line, in order to check which style violations are still there:
$ bibcop main.bib
This will print the same errors as you saw earlier in the LaTeX log.
On CTAN, you can find the full PDF documentation.
You are welcome to suggest additional style checkers, via GitHub issues.
Anonymous 04/09/2023 | Source: Irrational Exuberance
Back in late April, I mentioned that I was working on a new book, The Engineering Executive’s Primer, with O’Reilly. I wanted to share a few notes on progress!
First, there’s a cover, shown above in this post’s image, and also in the right rail (or bottom footer if you’re reading on a smaller device). I’m quite excited about the cover, which is simple and imperfect. There is nothing pure about being an executive; it’s mostly about balancing opposing forces to the best of your ability, and I think the cover captures some of that. The map underneath the cracks is an early map of San Francisco’s Golden Gate Park (if you want further proof, try searching for “Stow Lake” whose label you can see peeking through in the crack on the right side).
Second, I’ve done a lot of writing. I’ve been sharing early chapters with the “executive” tag, which now has 28 posts, all except one of which are from this year. Every one of those is an idea that I intended for the book. Some will be in the book exactly as is (well, almost exactly, they all still need some editing), others have been trimmed down to asides to include within other chapters, and just a couple of them didn’t end up fitting (e.g. the post on creating executive LinkedIn profiles was top of mind for me as I was reworking mine for the job search that helped me connect with Carta, but there’s no tool-specific advice I can write that’s truly evergreen–tools change too often).
At this point, I am nominally done writing, although what I really mean is that I’ve finished the first draft. There’s still quite a bit of editing, including incorporating feedback from an amazing group of tech reviewers (ty Jasmine, Julia, Kevin, Tanya, Uma, and Virginia), which I hope to finish over the course of September.
From there, there’s copy editing, preparing the book for printing, actually printing the book, and so on, but most of that won’t require much direct involvement from me. That means we should be on track for the digital version being complete by the end of this year, and the physical release by June 2024.
This is my third book, and I’d say that I have a pretty clear sense of how to write this sort of book, so it hasn’t been a particularly tortured experience pulling it together. It certainly helped that I had a couple months winding down at Calm before starting at Carta, which gave some space to focus on outlining and writing the book. I’m pretty sure I couldn’t have written this while ramping up at a new job if so much of it hadn’t already been pulled together. In particular, the chapters that I think are exceptionally good were all written by the time I started, including Writing an engineering strategy, which I hope will be the enduring piece from this book. (Perhaps that’s wishful thinking, as it’s a topic I’ve been trying to land for a long time now.)
Alright, now I’m off to edit, prepare for a talk on engineering strategy at QCon San Francisco in October, and continue my work at Carta.
Anonymous 03/09/2023 | Source: Irrational Exuberance
Uber’s original performance process was called “T3B3” and was remarkably simple: write the individual’s top 3 strengths and top 3 weaknesses, and share the feedback with them directly in person. There was a prolonged fight against even documenting the feedback, which was viewed as discouraging honesty. On the other side of things, there are numerous stories of spending months crafting Google promotion packets that still don’t get their authors promoted. Among those who’ve worked within both Uber and Google’s promotion processes, there are advocates and detractors, and absolutely no consensus on what an ideal performance process looks like.
Compensation is a subtly different set of problems, but similarly there are no universally appreciated compensation processes out there. Highly structured, centrally orchestrated compensation systems often converge on most folks at a given level receiving similar compensation, even if their impact is quite different. More dynamic compensation systems disproportionately reward top performers, which introduces room for bias.
Because there’s no agreement on what performance or compensation process you should use, you’ll likely end up working within a variety of systems. This post digs into:
Every one of these systems is loaded with tradeoffs and traps that you’ll need to be wary of, and after finishing this post, you should be prepared to plot the right course for your organization through them.
This is an unedited chapter from O’Reilly’s The Engineering Executive’s Primer.
Going back to Uber’s T3B3 performance process–where you told someone their top and bottom three areas for a given half–what was most remarkable was its radical simplicity. It was focused exclusively on providing useful feedback to the recipient. To this day, I find that clarity of purpose remarkable, and genuinely rare.
Most performance and compensation systems have far less clarity of purpose, because they try to balance many priorities from many stakeholders. Your typical process at a given company is trying to balance all of these goals:
I’ve never encountered, or heard of, a process that solves all these problems elegantly. My informed guess is that there simply isn’t any process that works with hundreds of people that isn’t a bit laborious to operate within. There’s also no way to flawlessly balance the goals of objective, consistent outcomes and recognizing exceptional individuals.
There’s a lot of room for improvement in these processes, and they can absolutely always be improved, but the tension in these processes is inherent to the participants’ conflicting goals. These conflicting goals are real, fundamental, unavoidable, and must be kept in mind as you make decisions about how your process works.
I’ll start out talking about performance processes, including promotions. Your baseline performance process is each manager providing written feedback for each of their direct reports, including a decision on whether to promote them, but there are quite a few details and variations to consider.
The first variations to consider are whether to include peer and upward feedback. Upward feedback is a constrained problem, as each person should only have one manager. In the worst case, asking for upward feedback generates low-value feedback, often because the individual doesn’t want to criticize their manager, but it doesn’t take up too much time.
Peer feedback can take up a significant amount of time, particularly for highly connected individuals who may be asked to provide peer feedback on ten or more individuals. This is usually accompanied with the advice that you can decline peer feedback requests if you get too many, but many individuals find it difficult to decline peer feedback requests, even if they know they should.
More importantly, my experience is that peer feedback is very inconsistent, and I’ve come to believe that each team’s beliefs about the value of peer feedback determine whether the feedback is actually useful. I’ve managed teams who feel peer feedback is too uncomfortable to give honestly, and those teams have provided useless peer feedback: in those cases, it’s not worth collecting peer feedback. I’ve also managed teams who believed fervently in the value of peer feedback, and those teams generated insightful, valuable feedback. As such, I prefer to lean towards empowering managers to make the decision on collecting peer feedback for their team. Often this is a policy decision enacted for the overall company, and in that case it’s not a battle I’d pick.
Agreeing on performance ratings and who should be promoted is nearly impossible without written criteria that describe the dimensions of expected performance for each level. However, before we can talk about leveling rubrics, first we have to talk about levels.
Most companies have paired titles and levels, such as:
The specific levels vary widely across companies (there are many sites that show how levels differ across companies), and what is a “Level 3” at some companies might be a “60” at another, and a “601” at a third. There is no consistent leveling standard across companies. It’s fairly common for Software Engineering levels to start at “Level 3”, as companies use levels across many functions, and often reserve “Level 1” for entry-level roles in functions with fewer entry requirements.
Titles vary even more widely across the industry, and there certainly isn’t a universal standard to adopt. If you are in the position of setting titles for your company, I recommend using the fairly typical progression of Entry-Level Software Engineer, Software Engineer, Senior Software Engineer, Staff Software Engineer, and Sr Staff Software Engineer. If you’re tempted to experiment with new titles, note that the downside is that it makes your hiring process more complex since you have to explain what the titles mean, and you will lose some candidates who are worried the non-standard titles will harm their career trajectory.
Once you establish Engineering’s titles and levels, the next step is documenting the leveling rubrics that describe expectations for operating within each level (again, there are a variety of sites that collect publicly available leveling rubrics from many companies). This can be a very sizable endeavor, and I’d recommend skipping the hardest part by picking a reasonably good one that’s available online, creating a working group to tweak the details, and then refining it after every performance cycle to address issues that come up.
Additionally, I’d emphasize a few things that I’ve learned the hard way over time:
Prefer concise leveling rubrics over comprehensive ones: there’s a strong desire for leveling rubrics to represent the complete, clear criteria for being promoted. The challenge, of course, is that many folks are exceptionally good at gaming specific criteria. For example, Stripe’s promotion criteria included mentorship, and I encountered folks who claimed to mentor others because they scheduled a meeting with that person, unrequested, and said that constituted mentorship.
Concise rubrics require more nuanced interpretation, but attempts to game rubrics mean that all options in practice require significant interpretation. You can respond to each attempt at gaming with even more comprehensive documentation, but your rubrics will quickly become confusing to use, more focused on preventing bad behavior than providing clear guidance for the well-intentioned.
Prefer broad job families over narrow job families: a classic executive decision is whether Site Reliability Engineers and Software Engineers should have different leveling criteria. Let’s say you decide that yes, separate criteria would be more fair. Great! Shouldn’t you also have separate rubrics for Data Engineers, Data Scientists, Frontend Engineers, and Quality Assurance Engineers?
Yes, each of those functions would be better served by having its own rubric, but maintaining rubrics is expensive, and tuning rubrics requires using them frequently to evaluate many people. Having more rubrics generally means making more poorly tuned promotion decisions, and creating the perception that certain functions have an easier path to promotion. I strongly recommend reusing and consolidating as much as possible, especially when it comes to maintaining custom rubrics for teams with fewer than ten people: you’ll end up exercising bespoke judgment when evaluating performance on narrow specializations whether or not you introduce a custom rubric, and it’s less expensive to use a shared process.
Capture the how (behavior) in addition to the what (outcomes): some rubrics are extremely focused on demonstrating certain capabilities, but don’t have a clear point of view about being culturally aligned on accomplishing those goals. I think that’s a miss, because it means you’ll promote folks who are capable but accomplish goals in ways that your company doesn’t want. Rubrics–and promotions–should provide a clear signal that someone is on the path to success at the company they work in, and that’s only possible if you maintain behavioral expectations.
My final topic around levels and leveling rubrics is that you should strive for them to be an honest representation of how things work. Many companies have a stated leveling and promotion criteria–often designed around fairness, transparency and so on–which is supplemented by a significant invisible process underneath that governs how things actually work. Whenever possible, say the awkward part out loud, and let your organization engage with what’s real. If promotions are constrained by available budget and business need, it’s better to acknowledge that than to let the team spend their time inventing an imaginary sea of rules to explain unexpected outcomes.
With leveling criteria, you can now have grounded discussions around which individuals have moved from one level to another. Most companies rely on managers to make a tentative promotion nomination, then rely on a calibration process to ratify that nomination. Calibration is generally a meeting of managers who talk through each person’s tentative rating and promotion decision, with the aim of making consistent decisions across the organization.
In an organization with several hundred engineers, a common calibration process runs in three rounds: managers first calibrate ratings within their sub-organization, then across the broader organization, and finally the executive team reviews the results.
The above example has three rounds of calibration (sub-organization, organization, executives), and each round will generally take three to five hours from the involved managers. The decisions significantly impact your team’s career, and the process is a major time investment.
The more calibrations that I’ve done, the more I’ve come to believe that outcomes depend significantly on each manager’s comfort level with the process. One way to reduce the impact of managers on their team’s ratings is to run calibration practice sessions for new managers and newly joined managers, to give them a trial run at the process before their performance dictates their team’s performance outcomes.
Another way is for you, as the functional executive, to have a strong point of view on good calibration hygiene. You will encounter managers who filibuster disagreement about their team, and you must push through that filibuster to get to the correct decisions despite their resistance. You will also find managers who are simply terrible at presenting their team’s work in calibration meetings, and you should try to limit the impact on their team’s ratings. In either case, your biggest contribution in any given calibration cycle is giving feedback to your managers to prepare them to do a better job in the subsequent cycle.
While most companies rely on the same group to calibrate performance ratings and decide on promotions, some companies rely on a separate promotion committee for the latter decision, particularly for senior roles. The advantage of this practice is that you can bring folks with the most context into the decision, such that Staff-plus engineers can arbitrate promotions to Staff-plus levels, rather than relying exclusively on managers to do so. The downside is that it is a heavier process, and often generates a gap between feedback delivered by the individual’s manager and the decision rendered by the promotion committee, which can make the process feel arbitrary.
The flip side of promotions is demotions, often referred to via the somewhat opaque euphemism, “down leveling.” Companies generally avoid talking about this concept, and will rarely acknowledge its existence in any formal documentation, but it is a real thing that does indeed happen.
There are three variants to consider:
All of these approaches are a mix of fair or unfair, and come with heavy or light bureaucratic aftereffects to deal with going forward. These bureaucratic challenges are why most companies try to avoid demotions entirely. Further, the concept of “constructive dismissal” means that demotions need the same degree of documentation as dismissals. It’s certainly not a time saving approach.
I avoided demotions entirely for a long time, but I have found demotions to be effective in some cases. First, there are scenarios where you mis-level a new hire. They might come in as a Staff Engineer (L6), but operate as a Senior Engineer (L5). In that scenario, your options are either to undermine your leveling for everyone by retaining an underperforming Staff Engineer–which will make every promotion discussion more challenging going forward–or to adjust their level down. I’ve done relatively few demotions, but few is not zero. I have demoted folks in my organizations, as well as those I directly managed, and the outcomes were better than I expected in every case where outright dismissal felt like the wrong solution.
When you’re designing processes, I think it’s helpful to think about whether you’re trying to raise the floor of expected outcomes (“worst case, you get decent feedback once a year”) or trying to raise the ceiling (“best case, you get life changing feedback”). Very few processes successfully do both, and most performance processes focus on raising the floor of delivered feedback. This is highlighted by the awkward, but frequent, advice that feedback in a performance process should never be a surprise.
Because performance processes usually optimize for everyone receiving some feedback, it’s unwise to rely on them as the mechanism to give feedback to your team. Instead, you should give feedback in real time, on an ongoing basis, without relying much on the performance process to help. If you’re giving good feedback, it simply won’t help much.
This is particularly true as your team gets more senior. If senior folks are getting performance feedback during the performance process, then something is going very wrong. They should be getting it much more frequently.
One of the trickiest aspects of performance management is when you end up managing a function that you’ve never personally worked in. You may be well calibrated on managing software engineers’ performance, but feel entirely unprepared to grade Data Scientists or Quality Assurance Engineers. That’s tricky when you end up managing all three.
What I’ve found effective:
This certainly is tricky, but don’t convince yourself that it can’t be done. Most executives in moderately large companies are responsible for functions that they never worked in directly.
As an Engineering executive, you will generally be the consumer of a compensation process designed by your People team. In that case, your interaction may come down to reviewing the proposed changes, inspecting for odd decisions, collecting feedback from senior managers about the proposals for their team, and making spot changes to account for atypical circumstances.
That said, I have found it useful to have a bit more context on how these systems are typically designed, so I will walk through some of their key aspects:
Companies typically build compensation bands by looking at aggregated data acquired from compensation benchmarking companies. Many providers of this data rely on companies submitting their data, and try to build a reliable dataset despite each company relying on their own inconsistent leveling rubrics. You’ll often be pushed to accept compensation data as objective truth, but recognize that the underlying dataset is far from perfect, which means compensation decisions based on that dataset will be imperfect as well.
Compensation benchmarking is always done against a self-defined peer group. For example, you might say you’re looking to benchmark against Series A companies headquartered in Silicon Valley. Or Series B companies headquartered outside of “Tier 1 markets” (“Tier 1” being, of course, also an ambiguous term). You can accomplish most compensation goals by shifting your peer group: if you want higher compensation, pick a more competitive peer group; if you want lower compensation, do the opposite. Picking peers is more an art than a science, but it’s another detail to pay attention to if you’re getting numbers that feel odd.
Once you have benchmarks, you’ll generally discuss compensation using the compa ratio, which expands to “comparative ratio.” Someone whose salary is 90% of the benchmark for their level has a 0.9 compa ratio, and someone who has 110% of the benchmark for their level has a 1.1 compa ratio.
Each company will have a number of compensation policies described using compa ratios. For example, most companies have a target compa ratio for new hires of approximately 0.95 compa, and aim for newly promoted individuals to reach approximately 0.9 compa at their new level after their promotion. Another common example is for companies to have a maximum compensation of 1.1 compa ratio for a given level: after reaching that ratio, your compensation would only increase as the market shifts the bands or if you were promoted.
Every company has a geographical adjustment component of their compensation bands. A simple, somewhat common, implementation in the United States is to have three tiers of regions–Tier 1, Tier 2 and Tier 3–with Tier 2 taking a 10% compensation reduction, and Tier 3 taking a 20% reduction. Tier 1 might be Silicon Valley and New York, Tier 2 might be Seattle and Boston, and Tier 3 might be everywhere else. Of course, some companies go far, far deeper into both of these topics as well, but structurally it will be something along these lines.
Whatever the compensation system determines as the correct outcome, that output will have to be checked against the actual company budget. If the two numbers don’t align, then it’s almost always the compensation system that adjusts to meet the budget. Keep this in mind as you get deep into optimizing compensation results: no amount of tweaking will matter if the budget isn’t there to support it.
Whatever the actual numbers end up being, remember that framing the numbers matters at least as much as the numbers themselves. A team that is used to 5-7% year over year increases will be very upset by a 3% increase, even if the market data shows that compensation bands went down that year. If you explain the details behind how numbers are calculated, you can give your team a framework to understand the numbers, which will help them come to terms with any surprises that you have to deliver.
Everyone has strong opinions about the frequency of their company’s performance cycles. If you run once a year, folks will be frustrated that a new hire joining just after the cycle might not get any formal feedback for their first year. If you run every quarter, the team will be upset about spending so much time on the process, even if the process is lightweight. This universal angst is liberating, because it means there’s no choice that will make folks happy, so you can do what you think will be most effective.
For most companies, I recommend a twice annual process. Some companies do performance twice a year, but only do promotions and compensation once a year, which reduces the overall time to orchestrate the process. There’s little evidence that doing more frequent performance reviews is worthwhile.
The only place I’ll take a particularly firm stand is against processes that anchor on each employee’s start date and subsequent anniversaries. For example, each employee gets a performance review on their anniversary of joining the company. This sort of process is very messy to orchestrate, makes it difficult to make process changes, and prevents inspecting an organization’s overall distributions of ratings, promotions or compensation. It’s an aesthetically pleasing process design, but it simply doesn’t work.
In The Engineering executive’s role in hiring, my advice is to pursue an effective rather than perfect hiring process, and that advice applies here as well. There is always another step you could take to improve your performance or compensation process’ completeness, but good processes keep in mind the cost of implementing each additional step. Many companies with twenty employees provide too little feedback, but almost all companies with 1,000 employees spend time on performance artifacts that could instead be devoted to giving better feedback, or to the business’ underlying work itself rather than meta-commentary about that work.
As an executive, you are likely the only person positioned to make the tradeoff between useful and perfect, and I encourage you to take this obligation seriously. If you abdicate this responsibility, you will incrementally turn middle-management into a bureaucratic paper-pushing function rather than a vibrant hub that translates corporate strategy into effective tactics. Each incremental change may be small enough, but in aggregate they’ll have a significant impact.
If you want to get a quick check, just ask your team–particularly the manager of managers–how they feel about the current process, and you’ll get a sense of whether the process is serving them effectively. If they all describe it as slow and painful, especially those who’ve seen processes at multiple companies, then it’s worth considering if you’ve landed in the wrong place.
This post has covered the core challenges you’ll encounter when operating and evolving the performance and compensation processes for your Engineering organization. With this background, you’ll be ready to resolve the first batch of challenges you’re likely to encounter, but remember that these are extremely deep topics, with much disagreement, and many best practices of a decade ago are considered bad practice today.
Augusto Campos 02/09/2023 | Source: TRILUX
Jerry Seinfeld on comedians constantly writing new jokes:
“I want to see your best work. I’m not interested in your new work.”
Applies to so many things in the content world.
--
Via Morgan Housel.
The article "“I want to see your best work. I’m not interested in your new work.”" was originally published on TRILUX, by Augusto Campos.
Anonymous 01/09/2023 | Source: is this it?
Grahame Sydney’s book The Art of Grahame Sydney has been on the coffee table the last couple of months. Central Otago landscapes are difficult to capture in central Melbourne, but I’ve taken inspiration from his other works and created this frame.
All photos were shot on the Olympus OM-1 with Kodak 250X film. Below are a few experiments with a 100mm lens. I usually shoot at the 50mm focal length, so I found the 100mm challenging for framing and pre-visualising shots.
Cut
The characteristic of the film shows through in the sky. Although empty, it’s still engaging.
Augusto Campos 29/08/2023 | Source: TRILUX
One Saturday in 2015 I woke up early to go to a second-hand bookshop1 looking for period magazines showing what life was like in the 80s and 90s, for a new project. Between a copy of Pais & Filhos, a Playboy, and a yellowed Revista Geográfica Universal, I overheard an older gentleman talking with the shop owner about a biography of Duarte Schutel, which he couldn't find anywhere.
The owner helps the man search the web; they find it and arrange to order it. Meanwhile, I (who also enjoy biographies) remembered having seen that very book on another visit to the same shop, the week before, for the same project.
I look for the book - it was shelved under local literature, not with the biographies - find it, and hand it to the gentleman, who was stunned. And the shop owner, whose inventory is computerized (though clearly full of holes), was even more impressed.
The man shakes my hand (revealing himself to be a Freemason, no less), pays the cashier the fortune of R$ 5.00, opens the book, and discovers that the copy was autographed by the author.
I leave the scene remarking: and when you write your autobiography, sir, don't forget the Saturday morning when a stranger found you this rare autographed book about the life of one of your brothers, and left without even telling you his name!
The reference to the brothers2 was too much for him, and he rushed out to try to fix the faux pas, introduce himself, and learn more, but I had already gotten into my car and pretended not to see him, so as not to spoil the effect.
The article "Crônica de um sábado cultural" was originally published on TRILUX, by Augusto Campos.
Yegor Bugayenko 29/08/2023 | Source: Yegor Bugayenko
The release of ChatGPT 3.5 has changed everything for us programmers. Even though most of us (including me) don’t understand how it works, some of us use it more frequently than Stack Overflow, Google, and IDE built-in features. I believe this is just the beginning. Even though only Microsoft knows what will happen next, let me try to make a humble prediction too. Below, I list what I believe robots (with Generative AI on board) will do in the future; the further into the future, the lower on the list. I tried not to repeat what GitHubNext is already saying.
Report Bugs. They will go through the codebase, analyze the code, and maybe even try to run some tests, then submit bug reports when problems are obvious. They will also submit bug reports when they find code that is hard to understand, improperly documented, not covered by automated tests, or has security vulnerabilities. Additionally, they will report when they see that the code is not following conventions or best practices. They will write their reports so nicely and provide so many technical details and supplementary links that programmers will prefer the reports from robots much more than reports from humans.
Review Pull Requests. They will examine the pull requests submitted to the repository (either by humans or robots) and review them by making comments on certain lines of code, criticizing the quality of the code and/or suggesting better alternatives. They will keep track of the suggestions made earlier and will insist where necessary. In the end, the authors of the pull requests won’t even know who is reviewing them: a human or a robot.
Refactor. From a huge collection of well-known micro-refactorings, they will select the few most important at any given moment and will submit pull requests with the changes. They won’t alter the functionality of the code or make massive modifications. Instead, they will improve the quality of the code in small increments, making it easy for us humans to merge their suggested changes. They won’t change too much, so we won’t feel managed by robots, but we will be. Slowly and incrementally, they will improve the codebase, making it more readable, maintainable, and better understood … by other robots.
Backlog Prioritization. They will sort tasks and tickets into their appropriate milestones, determining which ones are of higher priority. They will decide which bug should be fixed first and which feature request is more important than others. Utilizing historical data, current team velocity, and other relevant metrics, they will create a prioritized backlog that aligns with both short-term objectives and long-term goals.
Refine Bug Reports. They will examine already reported bugs and refine them, providing supplementary information, explaining the code to which the bug refers, and suggesting code snippets that could potentially reproduce the bug. They will do the work that most programmers are too lazy to do: properly explain the bug in order to help its fixer.
Document Source Code. They will find places in the code that are hard to comprehend, such as complex functions, large classes, and big data structures. They will generate documentation blocks and then submit pull requests with them. Humans will be happy to accept these, since documenting someone else’s code is a routine and boring part of work. Moreover, keeping the documentation in sync with the source code is one of the areas where our human negligence is most visible.
Fix Bugs. According to the code they already see in the codebase and the list of bugs reported in issues, they will generate some fixes and submit them as new pull requests. They will explain what the fixes are doing, why the improvement is made in this or that way, how critical the fix is, and also suggest possible alternatives. We will simply merge them.
Formalize Requirements. They will examine the codebase and the comments where we discuss it, and will derive a formal definition of the requirements we implement. Then, they will formulate the requirements using Use Case diagrams, Requirement Matrix, or even informal textual documents like README or Wiki. They will keep these documents up to date throughout the entire lifecycle of the codebase—something we humans are often too lazy to do.
Onboard. They will assist in the onboarding process of new developers, guiding them through the codebase, explaining architectural decisions, and offering personalized tutorials. They will also help us understand certain code blocks by providing interactive guidance.
Analyze Technical Debt. They will analyze the codebase to identify areas where technical debt is accumulating and suggest steps to mitigate it before it becomes problematic. They will submit tickets identifying the biggest debt territories and suggesting improvements.
Cleanup Documentation. They will reformat the doc blocks that we humans write for our classes and methods, and then submit pull requests with the changes. Formatting the documentation correctly, using HTML, Markdown, Doxia, and many other formats, is a boring work where we humans fall short.
Suggest New Features. They will examine already implemented functionality and will suggest additional features, submitting tickets. They will explain the reasons behind such new feature requests, find proper justification, and provide examples of how users will interact with the new functionality.
Document Architecture. They will observe the codebase and then update the documentation about the architecture it implements. This is something programmers usually forget to do, or simply don’t know how to do right. The robots will use UML or perhaps less formal instruments to document the architecture, thus making the entire product easier to maintain.
Estimate. They will estimate the complexity of every bug report or feature request in staff-hours, calendar days, or maybe even in lines of code. This information will help the team make planning decisions.
Predict. By examining events in a repository, they will spot anomalies in our human behavior, such as changes in the mood of programmers in the comments, spikes in the intensity of commits, failures in CI/CD pipelines, and so on. They will be able to predict larger troubles before it’s too late. They will predict and then suggest corrective and preventive actions, submitting tickets with management or technical suggestions.
Appraise. They will observe the activity of every programmer and will appraise their productivity. The results will be published directly to GitHub issues or perhaps sent to project managers by email. In the end, they will decide which of us humans are more valuable to their projects.
I’m thankful to ChatGPT for helping me build this list.
What do you think we missed?
Adrian 28/08/2023 | Source: death and gravity
Hi there!
I'm happy to announce version 3.9 of reader, a Python feed reader library.
Here are the highlights since reader 3.7.
Unexpected exceptions raised by update hooks, retrievers, and parsers are now wrapped in UpdateError, so errors for one feed don't prevent others from being updated. Also, hooks that run after a feed is updated are all run, regardless of individual failures. Plugins should benefit most from the improved fault isolation.
The API docs got a cool new exception hierarchy diagram (yes, it's autogenerated):
ReaderError
├── ReaderWarning [UserWarning]
├── ResourceNotFoundError
├── FeedError
│ ├── FeedExistsError
│ ├── FeedNotFoundError [ResourceNotFoundError]
│ └── InvalidFeedURLError [ValueError]
├── EntryError
│ ├── EntryExistsError
│ └── EntryNotFoundError [ResourceNotFoundError]
├── UpdateError
│ ├── ParseError [FeedError, ReaderWarning]
│ └── UpdateHookError
│ ├── SingleUpdateHookError
│ └── UpdateHookErrorGroup [ExceptionGroup]
├── StorageError
├── SearchError
│ ├── SearchNotEnabledError
│ └── InvalidSearchQueryError [ValueError]
├── PluginError
│ ├── InvalidPluginError [ValueError]
│ └── PluginInitError
└── TagError
└── TagNotFoundError
I moved all modules related to feed retrieval and parsing to reader._parser, another step towards internal API stabilization. This has also given me an opportunity to make lazy imports a bit less intrusive.
There's a new experimental timer plugin to collect per-call method timings.
The web app shows them in the footer.
Python 3.9 support is no more, as foretold in the ancient murals.
For more details, see the full changelog.
That's it for now.
Want to contribute? Check out the docs and the roadmap.
Learned something new today? Share this with others, it really helps!
reader takes care of the core functionality required by a feed reader, so you can focus on what makes yours different.
reader allows you to:
...all these with:
To find out more, check out the GitHub repo and the docs, or give the tutorial a try.
Have you been unhappy with existing feed readers and wanted to make your own, but:
Are you already working with feedparser, but:
... while still supporting all the feed types feedparser does?
If you answered yes to any of the above, reader can help.
So you can:
Obviously, this may not be your cup of tea, but if it is, reader can help.
Brett Cannon 27/08/2023 | Source: Tall, Snarky Canadian
I was talking with someone about how Andrea and I have been consciously taking fewer flights since the pandemic started in order to lower our carbon footprint (Take the Jump suggests a flight under 1500km every 3 years, longer than that every 8 years; heard about this from David Suzuki), and how that probably means always driving to PyCascades (thanks to our EV), flying to PyCon US (or EuroPython depending on things) and the core dev sprints, and that potentially being it for conference travel unless I combine it with a holiday. The person I was chatting with then asked me why I was willing to sacrifice some happiness from conferences for the planet when my individual carbon footprint is minuscule compared to entire countries that don't seem to be putting in as much effort as I am. I honestly wasn't prepared for that question, so I didn't have a good way to articulate why. But now that I have reflected on it, this blog post records my reasons for putting in at least some effort to lower my carbon footprint at the cost of some happiness for myself.
First, I think every little bit helps. I think of it in terms of a fighting game like Street Fighter 2 or Mortal Kombat: you might survive by a sliver of life, but a win is a win. Since I don't know what the magic tipping point is for the climate crisis to spiral out of control and destroy this planet for human beings, I would rather help keep even a sliver of health on the planet's life bar than look back on my deathbed and wonder whether I should have done more (at my age, I very much expect to make it to 2050 and see how good/bad things look for the rest of the century).
Second, I want to influence however I can everyone around me who votes to help push politicians to do their work to fight the climate crisis as that's where real gains can be made. This is essentially trickle-up ethics where I am trying to influence those around me, to then influence those around them, and so on and so forth, until politicians realize people care about the environment and they need to make changes to keep their jobs (or lives depending on the political system). This is a bit of a slog as you end up needing to have conversations over years on the climate with the same people, but I have seen changes in folks like my in-laws who are (unfortunately) the primary generation of folks who bother voting, so getting them to change their minds is important.
Anyway, that's why I bother doing what I consider my part in lowering my carbon footprint. As I said, I fully realize I could do more, but I am still willing to make some sacrifices to help out, since my small effort may well have some trickle-on effect that leads to marked improvements. And if we all did a small bit of sacrificing, it can add up in various ways, whether it's directly in the atmosphere or via the ethical views of society.
Anonymous 27/08/2023 | Source: Irrational Exuberance
Everyone in an engineering organization contributes to the hiring process. As an engineer, you may have taken pride in being an effective interviewer. As an engineering manager, you may have prioritized becoming a strong closer, convincing candidates to join your team. As a more senior manager, you will have likely shifted focus to training others and spending time with candidates for particularly senior roles.
As an engineering executive, your role in the hiring process will shift once again. You’ll continue to make some key leadership hires yourself, but you’ll spend more and more time designing and debugging your overall interview process.
In this post, we’ll cover:
After reading through, you’ll have a clear plan for structuring your overall hiring process, as well as your specific role within that new process. You’ve spent much of your career serving a hiring process, and now you need to create a system that serves you.
This is an unedited chapter from O’Reilly’s The Engineering Executive’s Primer.
Unless you’re joining an extremely early-stage company, the engineering organization will already have some sort of hiring process in place. Unless there’s widespread agreement that the current process isn’t working, you should participate in the existing process to get a feel for how it works.
It’s almost always the case that you can adapt the existing process to accomplish your goals rather than starting from scratch, and incorporating what already exists will both simplify retraining the team on the new process and build good-will with the folks who built the previous process.
Regardless of where you start, your final process should include every one of these components:
Applicant Tracking Systems (ATS): a good ATS is the core mechanism for coordinating your interviewing process. Although many early companies try, running an effective hiring process without an ATS is time intensive with limited return: don’t try it. There are enough reasonable options out there that I won’t recommend any one in particular.
Interview loop documentation: every role should have a documented interview loop that covers the interviews, the trained interviewers for each interview, and links to each interview’s definition and rubric.
Leveling framework: articulate how you level candidates based on their interview performance and prior experience. In particular, describe when you level candidates in your process.
Interview definition and rubric: define an explicit problem or set of questions to ask for each interview. Then add an explicit rubric for evaluating that candidate’s answers. My experience is that it’s preferable to be very consistent on which questions to ask. For example, using the same programming problem for all candidates’ programming interviews.
A frequent pushback is that candidates will cheat by learning the problem from previous candidates, which is certainly possible. However, I’ve found the risk of cheating is still lower than the risk of poor signal due to solving inconsistent problems. (Furthermore, it’s usually pretty clear which candidates are cheating. Make sure you have additional sections for them to complete if they go fast, and note if their ability to solve those sections degrades in a surprising way.)
Hiring role definitions: every interviewer, hiring manager, and recruiter will engage with your hiring process using assumptions built on the prior processes they’ve worked in. This will often lead to disagreement between hiring managers and recruiters about who’s responsible for the closing process, who has input on the offer’s compensation details, and so on. The best way to avoid this is being very explicit about who is responsible for what.
Job description template: you should create a baseline template for job descriptions, with a consistent structure and background on your organization, benefits, and mission.
Job description library: hiring managers should use the job description template to write a job description for each role they hire. These job descriptions should be aggregated in a shared folder where they can be reused rather than reinvented. This also simplifies keeping descriptions updated as you refine shared components.
Hiring manager and interviewer training: finally, the last component of an effective hiring process is a clear mechanism for training your interviewers. The most effective process I have seen is having new interviewers shadow several trained interviewers, combined with one reverse-shadow interview where an experienced interviewer shadows (and gives feedback to) the new interviewer.
There are certainly other approaches to consider, including training materials or classes, but I’ve found that many interviewers simply don’t pay attention in those trainings, whereas participation in shadowing and reverse-shadowing is much harder to fake.
If you’re joining a relatively scaled engineering organization, it’s likely that most of these will already exist, and that you can quickly formalize the few undocumented portions. On the other hand, if you’re joining a smaller organization, it’s quite possible that you’ll start from a place where none of these materials exist. In the latter case, I’d aim to introduce one or two components at a time over the course of a year: going too fast will overwhelm the team, but introducing each change in isolation will cause retraining fatigue as the hiring process changes repeatedly.
The two biggest errors that executives make in designing their hiring processes are not designing a process at all—hopefully addressed by the preceding section—and designing overly heavy processes that make it ineffective to hire. The latter is particularly challenging to notice, because you’ll often believe you are optimizing the process when you’re actually slowing it down.
The three clearest indications that you’ve over-optimized your hiring process are:
Each of these indicates a process that’s consuming a lot of energy without generating much impact. Often the cause is an indecisive executive who adds steps to find clearer signals, which generally obscures reality rather than clarifying it. I’ve also seen this caused by well-meaning, structured thinkers who are trying to replace biased human judgment with more structured approaches. Neither of these is an inherently bad idea, and it’s by inspecting the above indicators that you can check whether you’re really improving your process or whether it just feels like progress.
As the responsible executive, I recommend you require a high bar for each extension to your hiring process. Even if individually they make a great deal of sense, in aggregate they will quickly make the process too cumbersome to operate. Each specialized interview loop, each additional interview to design and train the team on, each approval step, each movement from an accountable individual to a committee–all of these will improve quality, but often in a way that leads to worse outcomes as the process grows heavy. If the current process works, even if it’s not ideal, push the team to work with it rather than extend it. You should certainly modify the process when it’s wholly broken, or when you can improve the standard path for everyone, but stay wary of specializations, customization, and the bespoke.
Once you’ve built the hiring process, your job as an executive is generally to monitor and debug it, rather than serve within it. This is particularly true after Engineering grows past 100 members, at which point you’ll be directly involved in the process for a small fraction of the senior-most hires.
Here are the mechanisms I’ve found effective in monitoring and debugging hiring, in the order that I’d consider rolling them out if I joined a company without much oversight:
Include Recruiting in your weekly team meeting: your Engineering leadership team should have a weekly team meeting, and I strongly encourage including a tech recruiter in that meeting. Their presence makes it possible to troubleshoot recruiting topics quickly and transparently. Some topics may not be particularly interesting to the recruiters, but that’s true for some members of most standing working meetings.
In particular, this is by far the easiest place to change hiring priorities without anyone feeling left out of the loop.
Hiring review meeting: meet once a month with the Engineering recruiting lead, and talk through their priorities, as well as any problems that have come up. Keeping this meeting small, typically just the two of you, means you can troubleshoot difficult issues that may be hard to discuss in your team meeting.
Visibility into hiring approval: although you should likely not be approving every hire in your organization, it’s extremely valuable to have a single place where you can see all the approvals. Often this is a private chat with Engineering’s hiring managers and recruiters, where each offer is approved.
Out-of-band compensation approval: this is discussed more below, but similar to seeing all hiring approvals, it’s even more helpful to be an approver on all atypical candidate offers. This gives you visibility into the places where your standard operating process isn’t working for some reason.
Monthly hiring statistics: have Recruiting report on hiring statistics for each role they’re currently hiring. It’s particularly helpful to understand throughput (hires per recruiter), time to hire, offer rate, and acceptance rate. Those four metrics, cohorted by each role, should be enough for you to identify the next set of questions to dig into.
There are, of course, always more meetings and tools that you can introduce. I’d recommend starting with a couple and going from there. As you’ve probably picked up by now, my experience is generally that you can go faster by making incremental changes than by introducing massive changes, even when your goal is transformation.
Sometimes executives insert themselves as a final interview in the hiring process, even after their organization becomes quite large. Executives who do this tend to swear by it as an essential step, where only they can ensure the quality bar for hires remains exceptionally high. However, it tends to significantly slow down the hiring process, and even executives who believe in it most strongly will eventually scale back on this practice.
However, while it’s unscalable to remain as an interviewer across all loops, it is particularly valuable to remain engaged in helping close senior candidates. As an executive, you should be able to tell Engineering’s story and how it contributes to the larger company story, and why that makes for interesting work. You’re also best placed to address strategic concerns the candidate raises.
The approach I’ve found helpful here is three-fold:
I’ve only found this counterproductive in two scenarios. First, some executives add a sell call as a mandatory part of the hiring process, which often creates more friction than it’s worth. There are candidates who are excited to accept without meeting you, and for them the additional sell call will slow things down, and executives are particularly painful to schedule. Second, there are some executives who are exceptionally bad at selling their organization. A friend once did a sell call with an executive who turned out to be watching online videos during the call, which unsurprisingly did not make them feel like a valued candidate.
Within the hiring process, the two most contentious topics tend to be determining compensation details for each candidate, and determining the candidate’s level. You can’t determine appropriate compensation for a candidate without knowing their level, so we’ll start there.
The first question to answer is when you level candidates in your process. The obvious answer is that you level candidates after seeing their interview performance, but there are a few issues with that. Most importantly, you likely want to conduct a different process to evaluate very senior candidates than to evaluate early career candidates. At a minimum, you’d want the interviewers to be different, as it’s relatively rare for a panel of mid-level interviewers to decide a candidate is Staff-plus, and you likely wouldn’t be confident in their evaluation even if they did.
Generally I recommend provisionally leveling candidates before they start the bulk of your interview process. For example, you might provisionally level them after they complete the technical phone screen, allowing you to customize the remainder of their process for that provisional level. You can then finalize the leveling decision as part of deciding whether to make an offer. I recommend relying on a simple provisional leveling heuristic such as a combination of technical phone screen performance and years of prior experience. This is far from perfect, but there’s generally enough signal there to determine the range of plausible levels.
The final leveling decision should be guided by a written leveling framework, which looks at the candidate’s holistic interview performance to determine a level. Part of that framework is handling disagreement around leveling, which is particularly common. The most common approach is that:
Some companies, particularly larger ones, rely on a committee rather than individual hiring managers for these decisions. My experience is that committees appear less biased, but generally introduce a bias of their own, and are less efficient than wholly accountable individuals. The counter-balance is that at a certain scale, it’s simply impossible to centralize these decisions without significantly slowing down your hiring process. I recommend introducing committees only after relying on individuals has proven too slow at your rate of hiring.
Compensation is a broad topic, which I’ll write about more in my next post, but here is a quick overview of determining compensation details for your offers. There are two particularly important questions which should be detailed in your hiring role definitions: who calculates the initial offer, and what are the approval steps once an offer has been calculated?
The approach that I’ve found effective is:
A centralized approach where recruiters follow a structured process to calculate offers has many benefits. First, it facilitates training on the shared process, and retraining on that process as your compensation bands adjust, as you experiment with offer strategy, and so on. Second, it avoids less effective hiring managers leaning on compensation, such that those hired by worse hiring managers get outsized compensation packages. (Which is surprisingly common, although of course it’s almost always framed as the weak hiring manager pursuing exceptional candidates.) Finally, you can still design a process to break bands if you want to, and a centralized mechanism makes it easier to both manage costs and drive consistency.
Some managers argue that this approach doesn’t give them enough flexibility to make compelling offers to the best candidates. That’s true, but I’ve consistently found that there’s always another way to close a candidate other than more compensation. Further, outsized compensation packages will always create ongoing problems in your annual compensation process, which will be designed to normalize compensation across individuals with similar performance ratings at a similar level. Your broader perspective as the Engineering executive is necessary to balance these incentives, whereas an individual hiring manager is almost always incentivized to hire even if it creates a long-term mess for the wider organization.
The intersection of headcount planning and hiring is discussed in How to plan as an engineering executive, but is worth mentioning here as well. In practice there are two fundamental modes of prioritizing hires:
In both cases, you’ll frequently have teams pushing for higher priority for their roles. I’m a believer in forcing leaders to solve within their constraints rather than frequently shifting those constraints, but my preference is just one of many ways to approach these tradeoffs. The most important thing to highlight is that both recruiter assignment and headcount are global constraints that you must control as the Engineering executive.
This control can either be something you do personally, or something you delegate to one individual to do on behalf of the wider organization, but these decisions must be made centrally. Making them centrally doesn’t mean that you have to spend a lot of time on them. The simplest way to sidestep this is to determine the headcount and recruiters for each Engineering sub-organization (roughly, each area corresponding to one of your direct reports) and then allow those sub-organizations to optimize within their boundaries and allocations.
The biggest trap to avoid is that prioritizing recruiters based on hiring need will often steer all recruiting capacity towards your least effective hiring managers. My learned belief is that slow hiring is almost always an execution issue by the hiring manager or the recruiter, and only infrequently the consequence of limited staffing. The exception is when you’ve opened too many concurrent roles for your current recruiter staffing, which is easy to diagnose by looking at the ratio of recruiters to roles (if you have more than three open roles per recruiter, something is very likely going wrong). If you really want to help, first consider spending time training the individuals involved rather than shifting headcount or recruiter staffing.
Earlier, I mentioned shadowing and reverse-shadowing as an effective mechanism to train interviewers. That is a crucial part of an effective hiring process, but there’s a second component of training that’s often ignored: training your hiring managers.
There’s a handful of particularly common hiring problems that are usually due to untrained or inexperienced hiring managers:
If you identify one of these, then I do recommend running focused trainings for your hiring managers on the specific topic that’s coming up. These are all topics that I’ve devoted a session of my Engineering Managers Monthly meeting to, talking through examples of why it’s problematic, why it’s not a sign of strong hiring, explaining what reasonable pass-through rates look like for a healthy hiring loop, and recommending strategies for overcoming the issue.
Once you’ve done a training session, you and the recruiting team should point out the issue to individuals who are running into it, and hold them accountable for fixing it. Folks making these mistakes will often have conviction that they’re doing the right thing, but don’t get swayed by their conviction. Effective hiring processes hire candidates. Hiring managers are accountable for their hiring process. Any argument suggesting one of these is false is a flawed argument.
When I worked at Yahoo!, our team needed another engineering manager. We didn’t run a hiring process, or even do interviews. Instead, our Director brought on a colleague he’d worked with before. That new manager soon decided he needed a tech lead on his team. We didn’t run a hiring process, do interviews, or consider candidates on the existing team. Instead, our new manager brought over one of his previous colleagues. A third previous colleague reached out to our Director, and without a single interview we’d soon hired a new Chief Architect who would ultimately never write or read a technical specification about our product, nor contribute a single line of code.
One of my teammates–one who had joined the team through the more traditional route of interviewing–described this pattern as the flying wedge, and it’s emblematic of the worst sort of network-hiring. Hiring exclusively from your network will convince your existing team that they and their networks aren’t wanted for important roles at your company.
A similar, somewhat common, scenario is one where your company exclusively fills important roles with external hires. Each individual external hire may make sense, but in aggregate the pattern of prioritizing external hires will encourage your team to seek career advancement elsewhere, draining your organization of context and continuity.
When it comes to internally or externally hiring and hiring within or without your network, the ideal path is always moderation. Hire some internally, some externally, some within and some without. Too much of any path will either isolate your culture from valuable opportunities to evolve, consolidate it onto the culture that worked at a former employer, or prevent it from coalescing to begin with.
In my experience, almost everyone agrees with the above statement, but quite a few don’t follow its advice. As I’ve dug into that, it’s generally because of a missing hiring skill:
Periodically look at the number of internal versus external hires for senior roles within your organization, and dig into areas where there are exclusively hires of one sort. If you find a lopsided pocket of your organization, talk with the relevant leader and push them to make one hire of the sort they’re currently ignoring. Even one will force them to acknowledge the skill gap, and start the process of fixing the imbalance.
If the person with a significant imbalance is you, then take it seriously! Don’t hide from it by justifying the imbalance with philosophical or intellectual rationales; instead, push yourself to make one hire of the other sort. Particularly for new executives, I often find there’s an underlying belief that they cannot close strong external candidates, and disproving that belief is an important part of your personal growth.
The details of building an engineering brand are discussed in Building personal and organizational prestige, which I’ll avoid repeating here in full. Instead, I’ll briefly repeat its conclusion regarding building Engineering brands in particular:
In general, if you’re already finding enough top-of-funnel candidates for your hiring process, don’t spend more time here unless you can connect that time to another business objective, or have internal folks who find this work energizing enough to take it on as a side project.
Many companies introduce centralized (Engineering-scoped) or semi-centralized (Product Engineering or Infrastructure Engineering-scoped) hiring committees as part of maintaining a consistent hiring process. I’ve seen this happen frequently enough in Silicon Valley companies that some executives have come to believe that hiring committees are a natural or ideal landing spot.
Hiring committees are a useful tool, but I’d caution against introducing them as the obvious solution: they come with their own problems.
I generally dislike committees because they introduce ambiguity about who should be held accountable for outcomes. In this case, they also mean that hiring decisions are made further from the particular team, which often degrades individual decisions. These committees are also vulnerable to misaligned members. I was once in a hiring committee where a new member joined who relied very heavily on the universities that candidates attended. Even when we clarified that we didn’t hire that way, they refused to change, and our Engineering executive was unwilling to hold them accountable to our hiring practices.
On the positive side, they are also a great mechanism for training hiring managers’ judgment on what makes a good candidate. They also introduce more consistent hiring practices across an inconsistent organization, solving a similar problem as Amazon’s Bar Raiser program. Committees are slower than a responsive hiring manager, but faster than a disengaged or very busy hiring manager.
If you came up as a rules-minded leader, you can almost certainly think of examples where your executive responsible for designing the hiring process also ignored that process to accomplish an immediate goal. Personally, I was most annoyed by executives who steamrolled the process to hire former colleagues who performed poorly in our interview process. Each time I’d complain to colleagues, “Why did we build this comprehensive hiring process if we don’t even trust its decisions?”
As is often the case, as I switched into the role of the executive responsible for Engineering hiring, I began to appreciate why perfectly following the process was difficult. When I vowed to loyally follow the hiring bands, sometimes I’d find peer executives paying far outside the bands, implicitly penalizing hires in Engineering. When I endeavored to respect each negative hiring review, sometimes I’d encounter interviewers who refused to use the stated rubrics. When I hired for a brand new role, I’d sometimes find interviewers who interpreted the role’s requirements very differently, even if I pulled together materials explaining the role.
Each of those challenges can be solved over time with better training, but as an executive you rarely control the timeline you’re working in. Sometimes your problem is urgent today. In those scenarios, the question to answer is sometimes whether the company will be better off if you solve the underlying problem (e.g. missing a leader for a key role) or if you respect the process (e.g. don’t break the rules you created). You should try to solve your problem within the process you’ve designed, but don’t get so blinded by your process that you think the process is always more important than your problem at hand. Sometimes the process is clearly less important than the current problem.
That doesn’t mean you should always ignore your process. When your hiring process declines a candidate, there is usually valuable signal in that decision. Even if you’re confident the negative signals are wrong, it still undermines a hired individual when their new colleagues know they performed poorly in the hiring process but were hired anyway. There is a cost to defying your process, just as there is a cost to following it, and as an executive you need to make that tradeoff deliberately.
This post has covered your role as an executive in your organization’s hiring, the components you need to build for an effective hiring process, and provided concrete recommendations for navigating the many challenges that you’re likely to run into while operating the hiring process. There are an infinite number of questions to dig into, but this coverage will give you enough to get started, build a system that supports your goals, and start evolving it into something exceptionally useful.
Brandur Leach 26/08/2023 | Source: brandur.org
One of Go’s best features is not only that it does parallelism well, but that it’s deeply baked in. It’s best exemplified by primitives like goroutines and their dead simple ease of use, but extends all the way up the chain to the built-in tooling. When running tests for many packages with go test ./..., packages automatically run in parallel up to a maximum equal to the number of CPUs on the machine. Between that and the language’s famously fast compilation, test suites are fast by default instead of something that needs to be painstakingly optimized later on.
Within any specific package, tests run sequentially, and as long as packages aren’t too mismatched in test suite size, that’s generally good enough.
But having uniformly sized package test suites isn’t always a given, and some packages can grow to be quite large. We have a ./server/api package that contains the majority of our product’s API and ~200 tests to exercise it, and it’s measurably slower than most packages in the project.
For cases like this, Go has another useful parallel facility: t.Parallel(), which lets specific tests within a package be flagged to run in parallel with each other. When applied to our large package, it reduced the time needed for a single run by 30-40%, and by 2-3x for ten consecutive runs.
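Opting a test in is a one-line call at the top of the test function. A minimal sketch (the test name and body here are hypothetical):
package api_test

import "testing"

func TestExample(t *testing.T) {
	t.Parallel() // this test may now run concurrently with other parallel tests in the package

	// ... test body ...
}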
Before t.Parallel():
$ go test ./server/api -count=1
ok github.com/crunchydata/priv-all-platform/server/api 1.486s
$ go test ./server/api -count=10
ok github.com/crunchydata/priv-all-platform/server/api 11.786s
After t.Parallel():
$ go test ./server/api -count=1
ok github.com/crunchydata/priv-all-platform/server/api 0.966s
$ go test ./server/api -count=10
ok github.com/crunchydata/priv-all-platform/server/api 3.959s
These tests were already pretty fast (to beat a dead horse again: running every API test for this project is 3-5x+ faster than it took to run a single test case during my time at Stripe; language choice and infrastructure design make a big difference), but this is one of the packages that we run tests on most frequently, so a 30-40% speed-up makes a noticeable difference in DX when iterating.
After adding t.Parallel() to this one package, we then went through and added it to every test in every package, and then put in a ratchet with the paralleltest linter to mandate it for future additions.
Should you bother adding t.Parallel() like we did? Maybe. It’s a pretty easy standard to adhere to when starting from scratch, and for existing projects it’ll be easier to add today than at any point later on, so it’s worth considering.
Is ubiquitous use of t.Parallel() conventional in the Go world? As far as I can tell, no.
I like to use the Go language’s own source code to glean convention, and by my rough measurement only about 1/10th of its test suite uses t.Parallel():
# total number of tests
$ ag --no-filename --nobreak 'func Test' | wc -l
7786
# total number of uses of `t.Parallel()`
$ ag --no-filename --nobreak 't\.Parallel\(\)' | wc -l
620
This isn’t too surprising. As discussed above, parallelism across packages is usually good enough, and when iterating tests in one specific package, Go’s already pretty fast. For smaller packages adding parallelism is probably a wash, and for very small ones the extra overhead probably makes them slower (although trivially so).
Still, it might not be a bad idea. As some packages grow to be large, parallel testing will keep them fast, and annotating tests with t.Parallel() from the beginning is a lot easier than going back to add it to every test case and fix parallelism problems later on.
The biggest difficulty for many projects will be to have a strategy for the test database that can support parallelism. It’s easy to build a system where multiple tests target the same test database and insert data that conflicts with each other.
We use test transactions to avoid this. Each test opens a transaction, runs everything inside it, and rolls the transaction back as it finishes up. A simplified test helper looks like:
func TestTx(ctx context.Context, t *testing.T) pgx.Tx {
tx, err := getPool().Begin(ctx)
require.NoError(t, err)
t.Cleanup(func() {
err := tx.Rollback(ctx)
if !errors.Is(err, pgx.ErrTxClosed) {
require.NoError(t, err)
}
})
return tx
}
Invocations of the helper share a package-level pgx pool that’s automatically parallel-safe (but still has a mutex to make sure that only one test case initializes it):
var (
dbPool *pgxpool.Pool
dbPoolMu sync.RWMutex
)
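getPool() itself isn’t shown here; below is a minimal sketch of what a lazily initialized, mutex-guarded accessor might look like, assuming pgx v5’s pgxpool.New and a hypothetical TEST_DATABASE_URL environment variable:
// Assumes imports: "context", "os", "sync", and
// "github.com/jackc/pgx/v5/pgxpool".
func getPool() *pgxpool.Pool {
	// Fast path: the pool already exists, so a read lock suffices.
	dbPoolMu.RLock()
	pool := dbPool
	dbPoolMu.RUnlock()
	if pool != nil {
		return pool
	}

	// Slow path: take the write lock and initialize exactly once.
	dbPoolMu.Lock()
	defer dbPoolMu.Unlock()
	if dbPool == nil {
		p, err := pgxpool.New(context.Background(), os.Getenv("TEST_DATABASE_URL"))
		if err != nil {
			panic(err) // test-only setup; fail loudly
		}
		dbPool = p
	}
	return dbPool
}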
Usage is succinct and idiot-proof thanks to Go’s test Cleanup hook:
tx := TestTx(ctx, t)
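In a parallel test, the pieces compose naturally; a hypothetical test case:
func TestPlanGet(t *testing.T) {
	t.Parallel()

	ctx := context.Background()
	tx := TestTx(ctx, t) // every write below is rolled back when the test finishes

	// ... exercise the code under test using tx ...
	_ = tx // placeholder so the sketch compiles
}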
The trickiest problem I had to fix while enabling t.Parallel() involved Postgres upsert. We have a number of places where we seed data with an upsert to guarantee that it’s always in the database regardless of whether the program has run before or is starting for the first time. In the test suite, individual test cases would upsert a “known” resource:
plan := dbfactory.Plan_AWS_Hobby2(ctx, t, tx)
Implemented as:
func Plan(ctx context.Context, t *testing.T, e db.Executor, opts *PlanOpts) *dbsqlc.Plan {
validateOpts(t, opts)
configPlan := providers.Default.MustGet(opts.ProviderID).MustGetPlan(opts.PlanID, true)
plan, err := dbsqlc.New(e).PlanUpsert(ctx, dbsqlc.PlanUpsertParams{
CPU: int32(configPlan.CPU),
Disabled: configPlan.Disabled,
DisplayName: configPlan.DisplayName,
Instance: configPlan.Instance,
Memory: configPlan.Memory,
ProviderID: opts.ProviderID,
PlanID: configPlan.ID,
Rate: int32(configPlan.Rate),
})
require.NoError(t, err)
return &plan
}
To my surprise, adding t.Parallel() would fail many tests at these invocations. Despite every test case running in its own transaction, it’s still possible for them to deadlock against each other as they try to upsert exactly the same data, presumably because an upsert takes row-level locks that are held until the enclosing transaction commits.
We resolved the problem by moving to a fixture seeding model, so when the test database is being created, in addition to loading a schema and running migrations, we also load a common set of test data in it that all tests will share (test transactions ensure that any changes to it are rolled back):
.PHONY: db/test
db/test:
psql --echo-errors --quiet -c '\timing off' -c "DROP DATABASE IF EXISTS platform_main_test WITH (FORCE);"
psql --echo-errors --quiet -c '\timing off' -c "CREATE DATABASE platform_main_test;"
psql --echo-errors --quiet -c '\timing off' -f sql/main_schema.sql
go run ./apps/pmigrate
go run ./tools/src/seed-test-database/main.go
So the implementation becomes a lookup instead:
func Plan(ctx context.Context, t *testing.T, e db.Executor, opts *PlanOpts) *dbsqlc.Plan {
validateOpts(t, opts)
_ = providers.Default.MustGet(opts.ProviderID).MustGetPlan(opts.PlanID, true)
// Requires test data is seeded.
provider, err := dbsqlc.New(e).PlanGetByID(ctx, dbsqlc.PlanGetByIDParams{
PlanID: opts.PlanID,
ProviderID: opts.ProviderID,
})
require.NoError(t, err)
return &provider
}
We make fairly extensive use of logging, and previously we’d just log everything in tests to stdout. This is fine because Go automatically suppresses output to stdout without the additional -test.v verbose flag, and because tests ran sequentially, even when testing verbosely the output looked fine, with logs for each test case correctly appearing within their begin/end banners.
But with t.Parallel(), everything became mixed together into a big log soup:
=== RUN TestClusterCreateRequest/StorageTooSmall
--- PASS: TestClusterCreateRequest (0.00s)
--- PASS: TestClusterCreateRequest/StorageTooSmall (0.00s)
=== CONT TestMultiFactorServiceList
=== RUN TestMultiFactorServiceList/Success
=== RUN TestMultiFactorServiceUpdate/SuccessWebAuthn
time="2023-08-20T22:26:28Z" level=info msg="password_hash_line: Match result: success [account: eee5c815-b7c6-4f19-8e1d-92428eed32ab] [hash time: 0.000496s]" account_id=eee5c815-b7c6-4f19-8e1d-92428eed32ab hash_duration=0.000496s hash_match=true
=== RUN TestClusterServiceDelete/Owl410Gone
=== RUN TestMultiFactorServiceList/Pagination
time="2023-08-20T22:26:28Z" level=info msg="sessionService: password_hash_upgrade_line: Upgraded password from \"argon2id\" to \"argon2id\" [account: eee5c815-b7c6-4f19-8e1d-92428eed32ab] [hash time: 0.000435s]" account_id=eee5c815-b7c6-4f19-8e1d-92428eed32ab new_algorithm=argon2id new_argon2id_memory=1024 new_argon2id_parallelism=4 new_argon2id_time=1 new_hash_duration=0.000435s old_algorithm=argon2id old_hash_iterations=0
=== RUN TestClusterUpgradeServiceCreate/HobbyMaximum100GB
=== RUN TestClusterServiceCreate/WithPostgresVersionID
=== RUN TestMultiFactorServiceUpdate/WrongAccountNotFoundError
=== RUN TestClusterServiceForkCreate/WithTargetTime
--- PASS: TestMultiFactorServiceList (0.01s)
--- PASS: TestMultiFactorServiceList/Success (0.00s)
--- PASS: TestMultiFactorServiceList/Pagination (0.00s)
=== CONT TestClusterServiceActionTailscaleDisconnect
=== RUN TestClusterServiceActionTailscaleDisconnect/Success
time="2023-08-20T22:26:28Z" level=info msg="password_hash_line: Match result: success [account: eee5c815-b7c6-4f19-8e1d-92428eed32ab] [hash time: 0.000828s]" account_id=eee5c815-b7c6-4f19-8e1d-92428eed32ab hash_duration=0.000828s hash_match=true
This isn’t usually a problem because you’re not reading the logs anyway, but quickly becomes one if you get a test failure, and only have senseless noise around it to help you debug.
The fix for this is t.Logf, which makes sure to collate log output with the particular test case that emitted it. This will generally require a shim for use with a logging library, like:
// tlogWriter is an adapter between Logrus and Go's testing package,
// which lets us send all output to `t.Log` so that it's correctly
// collated with the test that emitted it. This helps especially when
// using parallel testing where output would otherwise be interleaved
// and make debugging extremely difficult.
type tlogWriter struct {
tb testing.TB
}
func (lw *tlogWriter) Write(p []byte) (n int, err error) {
// Unfortunately, even with this call to `t.Helper()` there's no
// way to correctly attribute the log location to where it's
// actually emitted in our code (everything shows up under
// `entry.go`). A good explanation of this problem and possible
// future solutions here:
//
// https://github.com/neilotoole/slogt#deficiency
lw.tb.Helper()
lw.tb.Log(string(p)) // Log rather than Logf: p is raw log data, not a format string
return len(p), nil
}
Then with Logrus for example:
func Logger(tb testing.TB) *logrus.Entry {
logger := logrus.New()
logger.SetOutput(&tlogWriter{tb})
return logrus.NewEntry(logger)
}
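Tests then build their logger from the test handle so that output is attributed correctly; hypothetical usage:
func TestSessionServiceCreate(t *testing.T) {
	t.Parallel()

	logger := Logger(t) // everything logged below collates under this test
	logger.Info("starting test")

	// ... test body ...
}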
Now when a test fails, any logs it produced are grouped correctly:
--- FAIL: TestSessionServiceCreate (0.05s)
--- FAIL: TestSessionServiceCreate/PasswordHashAlgorithmUpgrade (0.05s)
entry.go:294: time="2023-08-20T22:34:15Z" level=info msg="password_hash_line: Match result: success [account: 81b967f7-4f5c-4ab4-b1d7-3c455db35767] [hash time: 0.000694s]" account_id=81b967f7-4f5c-4ab4-b1d7-3c455db35767 hash_duration=0.000694s hash_match=true
entry.go:294: time="2023-08-20T22:34:15Z" level=info msg="sessionService: password_hash_upgrade_line: Upgraded password from \"argon2id\" to \"argon2id\" [account: 81b967f7-4f5c-4ab4-b1d7-3c455db35767] [hash time: 0.011716s]" account_id=81b967f7-4f5c-4ab4-b1d7-3c455db35767 new_algorithm=argon2id new_argon2id_memory=19456 new_argon2id_parallelism=4 new_argon2id_time=2 new_hash_duration=0.011716s old_algorithm=argon2id old_hash_iterations=0
session_service_test.go:197:
Error Trace: /Users/brandur/Documents/crunchy/platform/server/api/session_service_test.go:197
/Users/brandur/Documents/crunchy/platform/server/api/session_service_test.go:158
Error: artificial failure
Test: TestSessionServiceCreate/PasswordHashAlgorithmUpgrade
Bridges for common loggers like slog are usually available as public packages. Slogt, for example.
Our tests use goleak to detect any accidentally leaked goroutines, a practice that I’d recommend since leaking goroutines without realizing it is easily one of Go’s top footguns.
Previously, we had a pattern in which every test case would check itself for goroutine leaks, but adding t.Parallel() broke the pattern because test cases running in parallel would detect each other’s goroutines as leaks.
The fix was to use goleak’s built-in TestMain wrapper:
func TestMain(m *testing.M) {
goleak.VerifyTestMain(m)
}
Leaked goroutines are only detected at package-level granularity, but as long as you’re starting off from a baseline of no leaks, that’s good enough to detect regressions.
By default the paralleltest lint will not only require that every test case define t.Parallel(), but that every subtest (i.e. t.Run("Subtest", func(t *testing.T) { ... })) define it as well. This is generally the right thing to do because it gives parallelism better granularity, making it more likely to produce optimal throughput and lower the total runtime.
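At full granularity, both the top-level test and each subtest opt in; a minimal sketch with hypothetical names:
func TestClusterCreate(t *testing.T) {
	t.Parallel() // runs alongside other top-level parallel tests

	t.Run("Success", func(t *testing.T) {
		t.Parallel() // runs alongside sibling parallel subtests
		// ... subtest body ...
	})
}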
Due to a historical tech decision made long ago, we were ubiquitously using a testing convention within test cases where we had plenty of subtests, but subtests were not parallel safe because they were all sharing a single var block.
Refactoring to total parallel-safety would’ve taken dozens of hours and wasn’t a good use of time, so we declared t.Parallel() at the granularity of test cases but not subtests to be “good enough”. I added an ignoremissingsubtests option to paralleltest to support that, and if your setup is anything like ours, maybe that’ll help you:
linters-settings:
paralleltest:
# Ignore missing calls to `t.Parallel()` in subtests. Top-level
# tests are still required to have `t.Parallel`, but subtests are
# allowed to skip it.
#
# Default: false
ignore-missing-subtests: true
As noted above, it’s not exactly Go convention to make ubiquitous use of t.Parallel(). That said, it’s reduced our test iteration time for large packages by 30-40%, and that’s enough of a development win that I personally intend to use it for future Go projects.
And although increased test speed is its main benefit, when combined with go test . -race it’s actually managed to help suss out some tricky parallel-safety bugs that weren’t being caught with sequential-only test runs. That’s a big advantage because that whole class of bug is very difficult to debug in production.
Activating t.Parallel() everywhere for an existing project could be a big deal, but integrating it from the beginning has very little ongoing cost, and might yield substantial benefits later on.
Luke Plant 24/08/2023 | Source: Luke Plant's home page
Most of this post was written a long time ago, but I thought it would be useful to have somewhere public that I can point people to for my answer to this question, so I’m finally publishing it.
At the outset, I need to say that this issue is one that I think Christians should not divide over. The view I present below is not the one I grew up with, but I have no particular ambition to convert people to my view — except that, with regard to those who have the duty to teach God’s word, it is important to do so properly, “rightly handling the word of truth”, preaching the full counsel of God with all His authority, but never giving human ideas that same authority. It is to people with those duties that the following is really directed. The tone of this article should be interpreted with that in mind — my concern is with those who are not rightly teaching scripture (while being aware that I have failed and probably continue to fail in this extremely demanding privilege in many ways).
Before going on — if you are worried about the length of this article, the last two thirds of it actually consists of an appendix containing quotations from the early church, and are not part of the main argument.
For my definition of the concept of a “Christian Sabbath” or “Christian Sabbatarianism”, I will take this quotation from The Westminster Confession:
Chapter 21 VII. As it is of the law of nature, that, in general, a due proportion of time be set apart for the worship of God; so, in his Word, by a positive, moral, and perpetual commandment, binding all men in all ages, he hath particularly appointed one day in seven for a Sabbath, to be kept holy unto him: which, from the beginning of the world to the resurrection of Christ, was the last day of the week; and, from the resurrection of Christ, was changed into the first day of the week, which in Scripture is called the Lord’s Day, and is to be continued to the end of the world as the Christian Sabbath.
We need to ask if the above statement is biblically grounded or not.
First, a principle: in teaching people to obey God, it is a sin to add to the commands that God has given us. We are allowed to go no further than what the Bible itself requires in the demands we place on people, or we come under the condemnation of Jesus (Matthew 15:7-9).
We must teach only what the Bible teaches, and what can and must be deduced from it. As the Westminster Confession puts it so well:
Chapter 1. VI. The whole counsel of God, concerning all things necessary for his own glory, man's salvation, faith, and life, is either expressly set down in Scripture, or by good and necessary consequence may be deduced from Scripture: unto which nothing at any time is to be added, whether by new revelations of the Spirit, or traditions of men.
In other words, we are not free to extrapolate, “read between the lines” or “join up the dots” in any way we please, but must teach all of what Scripture explicitly says and what necessarily flows from it, according to its own logic, and only that.
We note that Scripture may teach by precept, example or implication, but precept is stronger than example, as an example of behaviour found in the Bible could be good, bad, or incidental. Implication can be fairly strong or fairly weak, depending on the details.
I will respond to the claims of the Westminster Confession with a series of questions:
Does the NT ever refer to the Lord’s Day as the Sabbath?
No, it does not.
This is already sufficient reason to not call the Lord’s Day the Sabbath. If the Bible doesn't call it that, it may well have good reasons for not doing so, and we will only succeed in confusing ourselves and biasing our reading of Scripture when we use biblical terminology in unbiblical ways.
To give an illustration:
In charismatic and Pentecostal circles, it is quite common to use the phrase “waiting on the Lord” to mean a kind of meditative, “listening” prayer in which you wait for the Holy Spirit to lead your thoughts directly, and interpret that as the voice of God.
One of the difficulties with this is that, in my view, it is taking a biblical phrase and using it in an unbiblical way — I think “waiting on/for the Lord” in the Bible is actually about trusting God. This produces a feedback loop that is difficult to escape from. Because of how the phrase is used in those circles, every time they read Psalm 130:5,6, Isaiah 40:31 or similar passages, it is firstly assumed that the Bible is talking about their practice of listening in prayer. Those texts then reinforce not just the legitimacy of the practice, but its importance.
When asked for biblical support for their practice, they do point to these texts — despite the fact that the phrases they contain have been interpreted according to their usage of that terminology, rather than actually describing the practice in a clear way. It becomes very difficult for them to believe that listening in prayer is either unbiblical or not as important as they have thought — after all, they know for a fact that they've been encouraged to do that many times in God’s word, even if they can't remember where.
(I’m not saying here that God never leads us via our thoughts when praying, by the way, that’s another issue I’m not getting in to.)
In the same way, if we call the Lord’s Day “the Sabbath”, every time we read the Ten Commandments or many other passages about the Sabbath, we equate “Sabbath” with “the Lord’s Day”, creating a feedback loop that makes it very difficult to even take the non-Sabbatarian view seriously — after all, we know for a fact that God has told us that it is a sin to work on the Lord’s Day, being unaware of the unbiblical interpretative jump our minds have made. I suspect that this is the primary reason that the Sabbatarian position retains a hold over many Christian circles.
And, by the way, as far as I can tell from the records we have, in at least the first 4 centuries, while Christian teachers often mentioned “the Sabbath”, they never used that word to refer to the Lord's Day — see Appendix.
Does the NT ever prescriptively take Sabbath laws and apply them to the Lord’s day (e.g. command people not to work on the Lord’s Day)?
No, it does not.
Does the NT ever descriptively set out a pattern of Christians observing Sabbath regulations on the Lord’s day?
No. We do find Christians worshipping God on the first day of the week. But they worshipped on other days too (Acts 2:46). Whether Christians are required to spend some time worshipping God on the Lord’s Day is a different question to whether the Lord’s Day is the Sabbath. We are certainly never told they avoided labour or recreation on the Lord’s Day, or gave the whole day over to the worship of God.
In Acts 20:7, the disciples there apparently met late at night.
Often it seems they met before dawn on Sunday:
They asserted, however, that the sum and substance of their fault or error had been that they were accustomed to meet on a fixed day before dawn and sing responsively a hymn to Christ as a god
—Pliny
Most likely, the unsociable hours of these meetings were due to the fact that they were working the rest of the time, since Sunday was an official day of work in the Roman Empire until Constantine.
The following is an argument from silence, and is therefore weaker, but I think it has some strength to it:
Had the early church been refusing to work on Sunday, this would have been scandalous, and a more than adequate justification for persecution (at least in the eyes of the persecutors). It seems historically quite unlikely that, if Christians had the practice of taking the whole of Sunday off, there would be no record of it, especially given documents like Justin Martyr's Apologies, where he defends Christians against the grievances that others had against them.
This leaves the case for Christian Sabbatarianism on very shaky ground, with neither precept nor example to support it.
However, we still want to answer the question “what should we do with the 4th commandment?”. This could potentially provide a case for a Christian Sabbath concept by way of some biblical logic. Answering this question requires looking at both the OT background to the command, and how the NT treats it.
We find:
The Sabbath is not a creation ordinance, despite what some say. Adam was commanded to work, reproduce etc., but there is no command to rest every seven days. The account of God’s creation in six days and rest on the seventh is not made into any kind of pattern in the book of Genesis, and while Genesis 2:3 talks of God blessing the seventh day and making it holy, it doesn’t fill out what that means in terms of a requirement not to work.
There is no record of anyone observing Sabbaths until we come to Moses. (See also quotes from Justin Martyr and Tertullian below, who said that Abel, Enoch, Noah and Melchizedek did not observe Sabbaths).
The creation-basis for the command in the law of Moses is not a strict copy, but an adaptation based on the pattern. God worked for 6 days, then had an eternal day of rest (there is no “evening and morning the seventh day”). This is then adapted into a weekly cycle with a commandment to cease from labour for the Jews. So we are primed for the idea that the creation principle of rest may be adapted in different ways in the New Covenant.
The Jewish Sabbath is a special sign of God’s covenant with the Jews — see Ezekiel 20:11-12. Clearly God couldn't have said this of moral laws: the command not to murder, for example, could not have been called a “sign” between God and the Jews, since it was common to Jews and the rest of the world.
On this basis, it seems very unlikely that the Jewish Sabbath is part of the moral law that all the world must obey. The Westminster Confession does not have an adequate biblical basis for saying that God appointed one day in seven to be kept holy by all people “from the beginning of the world”.
Does the NT ever speak directly on the issue of how laws about Sabbath or special day observance are to be handled by Christians? Thankfully, it does:
Colossians 2:16-17, Galatians 4:10, Romans 14:5.
These texts are clear, and do not require exegetical somersaults to understand, once Sabbatarian glasses have been removed:
The Sabbath is, like other OT ceremonies, a shadow that is fulfilled in Christ.
We are at liberty to observe special days if we want to, but not to require other people to do so.
In the NT, there are no holy things or places or days, only a holy people.
The Sabbath is fulfilled for Christians by the eternal rest of the new heavens and new earth. Christians enter that now in spiritual ways, by resting in Christ, and ceasing their attempts to gain acceptance by works.
Negatively, the council of Jerusalem is also deafening in its silence on the issue. It specifically discusses the points where Jewish law impinged upon Gentile consciences. Sabbath observance was not practised among Gentiles, so I think the silence of Acts 15 on this matter is rather difficult to explain if the apostles believed that Sabbath observance was necessary for Gentiles and had been moved to the Lord’s Day after Christ’s resurrection, as claimed by the Westminster Confession.
OT and NT point unambiguously in the same direction. Other texts that are sometimes quoted (“The Son of Man is Lord of the Sabbath” etc.) simply don't address the question (unless you have made the unbiblical equation of “the Sabbath” equals “the Lord’s Day”, in which case seeing these texts clearly will require a fair amount of un-thinking).
One text which is sometimes used to support the universality of the Sabbath is Mark 2:27, “The Sabbath was made for man, not man for the Sabbath”. This argument depends firstly on translating ‘anthropos’ as ‘man’, and then understanding ‘man’ to mean ‘all mankind’. It is perfectly possible to translate ‘anthropos’ as ‘people’ (as per the NET translation) or understand it in that way, and the argument then disappears. If I said “The Highway Code was made for people, not people for the Highway Code”, I would not be implying that wherever there are people, there is the Highway Code, and it will always be that way, time without end. In fact I would more likely be implying the opposite — the Highway Code is an invention that serves human needs, and can be adjusted or abandoned if necessary. The question is then: which meaning is more appropriate for this text? Given the OT history, which gives no hint of Sabbath observance for all mankind, either by precept or example, with the Sabbath being introduced by Moses and understood as a specific sign of God’s covenant with the nation of Israel (as above), it seems far more appropriate to understand this text as meaning simply “the Sabbath was made for people” — and not as a statement of the universality of the Sabbath.
There remains one argument I know of in favour of Christian Sabbatarianism: the Sabbath is part of the Ten Commandments, which are God’s eternal moral law, and therefore must continue.
First, in response, the Bible never states that the Ten Commandments are God’s eternal moral law. The division of the law into moral, civil and ceremonial, while useful, is not strictly biblical, and must always be subject to what the Bible actually says. The NT texts on the Sabbath make it clear that the Jewish form of the Sabbath (one day in seven rest) is ceremonial. We must not allow the systems that we have extracted from scripture (or think we have) to override plain exegesis. It is infinitely better to have holes, even gaping holes, in our systematic theology, than to handle the Bible in such a way that we override or ignore just one of God's holy words.
The argument that the Sabbath is part of God’s eternal moral law reminds me of the proof that 2 is an odd number. It goes like this:
Consider the prime numbers. They are, by definition, positive integers that are divisible only by 1 and themselves. The sequence starts: 2, 3, 5, 7, 11, 13, 17, 19, …
As you’ll notice, they are all odd numbers — look at them: 3 is odd, 5 is odd, 7 is odd, 11 is odd. All the prime numbers are odd.
– “Excuse me, what about 2? That looks even to me…”
– We don’t talk about 2. (I’ll see you after class).
As I was saying, all the prime numbers are odd.
The number 2 is a prime number.
Therefore, 2 is an odd number. QED.
The proof that the Sabbath is an eternal moral command looks the same:
All the Ten Commandments are God’s eternal moral law.
Look at them: “Do not murder” – a moral command that existed before Moses, and is repeated in the NT. And so it is with all of them – “You shall have no other Gods before me”, “Do not commit adultery” etc.
– “Excuse me, what about the 4th commandment? It seems pretty clear that the Sabbath was given specially to the Jews as a covenant sign, and the NT tells us that we don’t have holy days any more because they are fulfilled in Christ…”
– We don’t talk about the 4th commandment. And please don’t interrupt.
As you can see, all of the Ten Commandments are God’s eternal moral law.
The Sabbath law is part of the Ten Commandments.
Therefore the Sabbath is an eternal moral command. And we celebrate it on Sundays, obviously.
Even if we were to conclude the Sabbath is a moral command and must continue, we're not free to make up how it should continue. The NT actually gives us no ground for saying the Jewish Sabbath has been moved to the Lord’s Day. We would be left saying that it continues just as it is in the OT (producing many difficulties which I won't go into) — or, it continues and applies in the New Covenant age in the way described in Hebrews, that is, in a spiritual way as above (in other words, a long way round to the non-Sabbatarian position).
In fact, the NT is clear that the command is fulfilled in Christ just as other ceremonial commands are. We're not left in the dark about how to understand it. If we attempt to put observance of the Lord's Day as a Sabbath into a moral category, we produce an impossible situation when it comes to treating people who fail to observe it. For matters of plain morality, we are required by scripture to judge people, to the extent of putting them out of the church and not even keeping company with them — “expel the wicked person” (1 Corinthians 5). When it comes to observing holy days, Romans 14 tells us that we must not judge each other, but rather accept one another (v1, 5, 13). To claim, as some do, that Romans 14 is talking about sacred days apart from the Lord's Day is simply special pleading, as there is no basis for saying so. This is a simple reductio ad absurdum that shows we erred in making literal Sabbath day observance a NT obligation. Rather than it being logically inescapable that the Lord's Day is to be observed as a Sabbath — which is the standard required for us to teach other people to so observe it — the reverse is closer to the truth.
To conclude the argument from Scripture:
The idea stated in the Westminster Confession that the Lord’s Day is to be the Sabbath from the resurrection of Christ to the end of the world cannot be found in Scripture, just as its statement about Sabbath observance “from the beginning of the world” is also insufficiently supported by the Bible. There are no statements whatsoever to this effect — that the Sabbath must be observed on the Lord’s Day — whether by precept, example or implication. If this idea comes from the Bible at all, it only does so by one possible extrapolation among several, and not by “good and necessary consequence”, which is the standard any teaching must pass before it can be taught from our pulpits. Further, it is an extrapolation that contradicts how the Bible itself handles the subject.
However:
The principle behind the need to set time aside to worship God can certainly apply to how we use Sunday (as well as other times in the week), especially if we have the freedom to use Sunday in a way that we choose. We also have the freedom as believers to “observe” the Lord’s day if we want to, whatever we mean by that — but not to put that requirement onto others (Romans 14:5-6). There is also the pattern that NT believers have handed on of meeting together on the Lord's Day, and the commandment in Hebrews 10:25 to not forsake meeting together, which also mean that for most people, setting aside time to meet with God's people on Sunday must be a high priority.
For myself, with my work situation meaning that I have the freedom to rest on a Sunday (when I'm not preaching), I've found it an enormously helpful practice, and one that I commend to everyone. In fact, I would be suspicious of myself and my walk with God if I was preferring to do other things on the Lord’s Day — I've got the other days of the week when I can work. My practice has changed relatively little since I've come to a non-Sabbatarian position. But making this a binding rule on others, or even on myself, is not something that Scripture allows me to do.
There is also the principle of “rest”, which is a big topic and not one I intend to explore in this post. While I couldn’t agree with every word of it, I found Tim Keller’s sermon on Work and Rest to be really helpful.
While it is Scripture and Scripture alone that settles the matter, the Early Church is also of interest. To diagnose our own blind spots it is often helpful to look to what the Church has historically believed. The earlier you go, the less likely it is, in general, that the waters are muddied by traditions of men that have been added.
(UPDATE 2023-09-11) In addition, correct interpretation of some of the key texts mentioned above has often been overridden on the basis of historical claims that turn out not to be true. One example of this was furnished by a commenter below, whose website quotes from Wilhelmus à Brakel:
Secondly, it is a well-known truth that the apostles commanded the churches everywhere to observe the Lord’s day (refer to the above). It is common knowledge that there was neither any contention concerning that day, nor was there any intent to force or eradicate the observance of this day contrary to the wishes of the apostles.
I’ve heard this argument many times; it misled me in the past and continues to mislead people today. So I’m indebted to the person commenting below for providing a good example of it!
What is presented above as well-known truth is in fact false, or at best obscuring the truth. While the practice of meeting together on Sunday to celebrate the resurrection was indeed a widespread tradition that originated from earliest times, to call it “observing” the day, or claim the apostles commanded “observance” in the sense needed for a Sabbatarian view, is directly opposed to the evidence we have. (end update).
I have not been able to find any evidence of Christian Sabbatarianism at all in the first two centuries. Many sources suggest some Christians continued to observe the Jewish Sabbath (i.e. Saturday) for centuries, but I haven't yet found an early source for that.
In general, the sources describe the practice of Christians meeting together on the Lord’s Day as being pretty much universal, but without making it a Sabbath day.
Origen in 220 AD is the first to say that the Lord’s Day should be observed as a day of rest, but he seems to be out of line with most people of his time, who made no such rules.
Very clear quotes on the subject from early Christians, including early believers like Justin Martyr and authorities like Tertullian and Augustine, can be found at http://www.bible.ca/H-sunday.htm and are copied below.
They are quite explicit about Christians not observing the Sabbath, and not being required to — and in fact you are overthrowing the gospel if you do (Chrysostom)! The word Sabbath is used exclusively of Jewish holy days, or in a strictly spiritual sense that doesn’t involve obeying any Sabbath-day regulations, but rather resting in the gospel and living in general holiness of life.
Where they talk about Christians “observing” the Lord’s Day (which mostly starts from about 3rd/4th century), it is as a contrast to observing the Sabbath, the main requirement being that Christians be joyful and that they meet together, and not that they refrain from any activity — which is called Jewish superstition and idleness.
Put together, they present overwhelming evidence that there is not a hint of a “Christian Sabbath” tradition (one that fits with the Westminster Confession’s idea of what such a day is like) that was passed down from the apostles.
Justin Martyr is worth looking at in some detail:
This is a report of a long debate with some Jews, in which the subject of Sabbath and circumcision comes up several times. It's extremely clear that Justin Martyr did not consider Christians to be bound to observe the Sabbath or sabbath days, and had an understanding of the Sabbath exactly in line with what I have written above, often with the same proof texts.
And when they ceased, I again addressed them thus:—
“Is there any other matter, my friends, in which we are blamed, than this, that we live not after the law, and are not circumcised in the flesh as your forefathers were, and do not observe sabbaths as you do?
Trypho:
But this is what we are most at a loss about: that you, professing to be pious, and supposing yourselves better than others, are not in any particular separated from them, and do not alter your mode of living from the nations, in that you observe no festivals or sabbaths, and do not have the rite of circumcision; and further, resting your hopes on a man that was crucified, you yet expect to obtain some good thing from God, while you do not obey His commandments.
Justin Martyr:
I also adduced another passage in which Isaiah exclaims: “ ‘Hear My words, and your soul shall live; and I will make an everlasting covenant with you, even the sure mercies of David. Behold, I have given Him for a witness to the people: nations which know not Thee shall call on Thee; peoples who know not Thee shall escape to Thee, because of thy God, the Holy One of Israel; for He has glorified Thee.’ This same law you have despised, and His new holy covenant you have slighted; and now you neither receive it, nor repent of your evil deeds. ‘For your ears are closed, your eyes are blinded, and the heart is hardened,’ Jeremiah has cried; yet not even then do you listen. The Lawgiver is present, yet you do not see Him; to the poor the Gospel is preached, the blind see, yet you do not understand. You have now need of a second circumcision, though you glory greatly in the flesh. The new law requires you to keep perpetual sabbath, and you, because you are idle for one day, suppose you are pious, not discerning why this has been commanded you: and if you eat unleavened bread, you say the will of God has been fulfilled. The Lord our God does not take pleasure in such observances: if there is any perjured person or a thief among you, let him cease to be so; if any adulterer, let him repent; then he has kept the sweet and true sabbaths of God. If any one has impure hands, let him wash and be pure.
“For since you have read, O Trypho, as you yourself admitted, the doctrines taught by our Saviour, I do not think that I have done foolishly in adding some short utterances of His to the prophetic statements. Wash therefore, and be now clean, and put away iniquity from your souls, as God bids you be washed in this laver, and be circumcised with the true circumcision. For we too would observe the fleshly circumcision, and the Sabbaths, and in short all the feasts, if we did not know for what reason they were enjoined you,—namely, on account of your transgressions and the hardness of your hearts. For if we patiently endure all things contrived against us by wicked men and demons, so that even amid cruelties unutterable, death and torments, we pray for mercy to those who inflict such things upon us, and do not wish to give the least retort to any one, even as the new Lawgiver commanded us: how is it, Trypho, that we would not observe those rites which do not harm us, —I speak of fleshly circumcision, and Sabbaths, and feasts?
Therefore to you alone this circumcision was necessary, in order that the people may be no people, and the nation no nation; as also Hosea, one of the twelve prophets, declares. Moreover, all those righteous men already mentioned [Abel, Enoch, Noah, Melchizedek], though they kept no Sabbaths, were pleasing to God; and after them Abraham with all his descendants until Moses, under whom your nation appeared unrighteous and ungrateful to God, making a calf in the wilderness: wherefore God, accommodating Himself to that nation, enjoined them also to offer sacrifices, as if to His name, in order that you might not serve idols. Which precept, however, you have not observed; nay, you sacrificed your children to demons. And you were commanded to keep Sabbaths, that you might retain the memorial of God. For His word makes this announcement, saying, ‘That ye may know that I am God who redeemed you.’
“Moreover, that God enjoined you to keep the Sabbath, and impose on you other precepts for a sign, as I have already said, on account of your unrighteousness, and that of your fathers,—as He declares that for the sake of the nations, lest His name be profaned among them, therefore He permitted some of you to remain alive,—these words of His can prove to you: they are narrated by Ezekiel thus: ‘I am the Lord your God; walk in My statutes, and keep My judgements, and take no part in the customs of Egypt; and hallow My Sabbaths; and they shall be a sign between Me and you, that ye may know that I am the Lord your God. Notwithstanding ye rebelled against Me, and your children walked not in My statutes, neither kept My judgements to do them: which if a man do, he shall live in them. But they polluted My Sabbaths. And I said that I would pour out My fury upon them in the wilderness, to accomplish My anger upon them; yet I did it not; that My name might not be altogether profaned in the sight of the heathen. I led them out before their eyes, and I lifted up Mine hand unto them in the wilderness, that I would scatter them among the heathen, and disperse them through the countries; because they had not executed My judgements, but had despised My statutes, and polluted My Sabbaths, and their eyes were after the devices of their fathers. Wherefore I gave them also statutes which were not good, and judgements whereby they shall not live. And I shall pollute them in their own gifts, that I may destroy all that openeth the womb, when I pass through them.’
I also came across this work, dating from AD 130 to the end of the century, which is relevant for its general tenor:
Chapter IV.—The other observances of the Jews.
But as to their scrupulosity concerning meats, and their superstition as respects the Sabbaths, and their boasting about circumcision, and their fancies about fasting and the new moons, which are utterly ridiculous and unworthy of notice,—I do not think that you require to learn anything from me.
Chapter V.—The manners of the Christians.
For the Christians are distinguished from other men neither by country, nor language, nor the customs which they observe. For they neither inhabit cities of their own, nor employ a peculiar form of speech, nor lead a life which is marked out by any singularity. The course of conduct which they follow has not been devised by any speculation or deliberation of inquisitive men; nor do they, like some, proclaim themselves the advocates of any merely human doctrines. But, inhabiting Greek as well as barbarian cities, according as the lot of each of them has determined, and following the customs of the natives in respect to clothing, food, and the rest of their ordinary conduct, they display to us their wonderful and confessedly striking method of life. They dwell in their own countries, but simply as sojourners. As citizens, they share in all things with others, and yet endure all things as if foreigners. Every foreign land is to them as their native country, and every land of their birth as a land of strangers. They marry, as do all [others]; they beget children; but they do not destroy their offspring. They have a common table, but not a common bed. They are in the flesh, but they do not live after the flesh. They pass their days on earth, but they are citizens of heaven. They obey the prescribed laws, and at the same time surpass the laws by their lives. They love all men, and are persecuted by all. They are unknown and condemned; they are put to death, and restored to life. They are poor, yet make many rich; they are in lack of all things, and yet abound in all; they are dishonoured, and yet in their very dishonour are glorified. They are evil spoken of, and yet are justified; they are reviled, and bless; they are insulted, and repay the insult with honour; they do good, yet are punished as evil-doers.
The following are taken verbatim (including comments) from http://www.bible.ca/H-sunday.htm . I have checked the accuracy of some, but not most of them.
90AD DIDACHE: "Christian Assembly on the Lord’s Day: 1. But every Lord’s day do ye gather yourselves together, and break bread, and give thanksgiving after having confessed your transgressions, that your sacrifice may be pure. 2. But let no one that is at variance with his fellow come together with you, until they be reconciled, that your sacrifice may not be profaned. 3. For this is that which was spoken by the Lord: In every place and time offer to me a pure sacrifice; for I am a great King, saith the Lord, and my name is wonderful among the nations." (Didache: The Teaching of the Twelve Apostles, Chapter XIV)
100 AD BARNABAS "We keep the eighth day [Sunday] with joyfulness, the day also on which Jesus rose again from the dead" (The Epistle of Barnabas, 100 AD 15:6-8).
100 AD BARNABAS: Moreover God says to the Jews, 'Your new moons and Sabbaths 1 cannot endure.' You see how he says, 'The present Sabbaths are not acceptable to me, but the Sabbath which I have made in which, when I have rested [heaven: Heb 4] from all things, I will make the beginning of the eighth day which is the beginning of another world.' Wherefore we Christians keep the eighth day for joy, on which also Jesus arose from the dead and when he appeared ascended into heaven. (15:8f, The Epistle of Barnabas, 100 AD, Ante-Nicene Fathers , vol. 1, pg. 147)
110AD Pliny: "they were in the habit of meeting on a certain fixed day before it was light, when they sang in alternate verses a hymn to Christ, as to a god, and bound themselves by a solemn oath not to (do) any wicked deeds, never to commit any fraud, theft, or adultery, never to falsify their word, nor deny a trust when they should be called upon to deliver it up; after which it was their custom to separate, and then reassemble to partake of good food—but food of an ordinary and innocent kind."
150AD EPISTLE OF THE APOSTLES.- I [Christ] have come into being on the eighth day which is the day of the Lord. (18)
150AD JUSTIN: "He then speaks of those Gentiles, namely us, who in every place offer sacrifices to Him, i.e., the bread of the Eucharist, and also the cup of the Eucharist, affirming both that we glorify His name, and that you profane [it]. The command of circumcision, again, bidding [them] always circumcise the children on the eighth day, was a type of the true circumcision, by which we are circumcised from deceit and iniquity through Him who rose from the dead on the first day after the Sabbath, [namely through] our Lord Jesus Christ. For the first day after the Sabbath, remaining the first of all the days, is called, however, the eighth, according to the number of all the days of the cycle, and [yet] remains the first.". (Justin, Dialogue 41:4)
150AD JUSTIN: …those who have persecuted and do persecute Christ, if they do not repent, shall not inherit anything on the holy mountain. But the Gentiles, who have believed on Him, and have repented of the sins which they have committed, they shall receive the inheritance along with the patriarchs and the prophets, and the just men who are descended from Jacob, even although they neither keep the Sabbath, nor are circumcised, nor observe the feasts. Assuredly they shall receive the holy inheritance of God. (Dialogue With Trypho the Jew, 150-165 AD, Ante-Nicene Fathers, vol. 1, page 207)
150AD JUSTIN: But if we do not admit this, we shall be liable to fall into foolish opinion, as if it were not the same God who existed in the times of Enoch and all the rest, who neither were circumcised after the flesh, nor observed Sabbaths, nor any other rites, seeing that Moses enjoined such observances… For if there was no need of circumcision before Abraham, or of the observance of Sabbaths, of feasts and sacrifices, before Moses; no more need is there of them now, after that, according to the will of God, Jesus Christ the Son of God has been born without sin, of a virgin sprung from the stock of Abraham. (Dialogue With Trypho the Jew, 150-165 AD, Ante-Nicene Fathers , vol. 1, page 206)
150AD JUSTIN: "And on the day called Sunday, all who live in cities or in the country gather together to one place, and the memoirs of the apostles or the writings of the prophets are read, as long as time permits; then, when the reader has ceased, the president verbally instructs, and exhorts to the imitation of these good things. Then we all rise together and pray, and, as we before said, when our prayer is ended, bread and wine and water are brought, and the president in like manner offers prayers and thanksgivings, according to his ability, and the people assent, saying Amen; and there is a distribution to each, and a participation of that over which thanks have been given, and to those who are absent a portion is sent by the deacons. And they who are well to do, and willing, give what each thinks fit; and what is collected is deposited with the president, who succours the orphans and widows and those who, through sickness or any other cause, are in want, and those who are in bonds and the strangers sojourning among us, and in a word takes care of all who are in need. But Sunday is the day on which we all hold our common assembly, because it is the first day on which God, having wrought a change in the darkness and matter, made the world; and Jesus Christ our Saviour on the same day rose from the dead. For He was crucified on the day before that of Saturn (Saturday); and on the day after that of Saturn, which is the day of the Sun, having appeared to His apostles and disciples, He taught them these things, which we have submitted to you also for your consideration." (First apology of Justin, Weekly Worship of the Christians, Ch 68)
150AD JUSTIN: Moreover, all those righteous men already mentioned [after mentioning Adam. Abel, Enoch, Lot, Noah, Melchizedek, and Abraham], though they kept no Sabbaths, were pleasing to God; and after them Abraham with all his descendants until Moses… And you [fleshly Jews] were commanded to keep Sabbaths, that you might retain the memorial of God. For His word makes this announcement, saying, "That you may know that I am God who redeemed you." (Dialogue With Trypho the Jew, 150-165 AD, Ante-Nicene Fathers , vol. 1, page 204)
150AD JUSTIN: There is no other thing for which you blame us, my friends, is there than this? That we do not live according to the Law, nor, are we circumcised in the flesh as your forefathers, nor do we observe the Sabbath as you do. (Dialogue with Trypho 10:1. In verse 3 the Jew Trypho acknowledges that Christians 'do not keep the Sabbath.')
150AD JUSTIN: We are always together with one another. And for all the things with which we are supplied we bless the Maker of all through his Son Jesus Christ and through his Holy Spirit. And on the day called Sunday there is a gathering together in the same place of all who live in a city or a rural district. [There follows an account of a Christian worship service, which is quoted in VII.2.] We all make our assembly in common on the day of the Sun, since it is the first day, on which God changed the darkness and matter and made the world, and Jesus Christ our Savior arose from the dead on the same day. For they crucified him on the day before Saturn's day, and on the day after (which is the day of the Sun the appeared to his apostles and taught his disciples these things. (Apology, 1, 67:1-3, 7; First Apology, 145 AD, Ante-Nicene Fathers , Vol. 1, pg. 186)
155 AD Justin Martyr "[W]e too would observe the fleshly circumcision, and the Sabbaths, and in short all the feasts, if we did not know for what reason they were enjoined [on] you–namely, on account of your transgressions and the hardness of your heart. . . . [H]ow is it, Trypho, that we would not observe those rites which do not harm us–I speak of fleshly circumcision and Sabbaths and feasts? . . . God enjoined you [Jews] to keep the Sabbath, and impose on you other precepts for a sign, as I have already said, on account of your unrighteousness and that of your fathers" (Dialogue with Trypho the Jew 18, 21).
180AD ACTS OF PETER.- Paul had often contended with the Jewish teachers and had confuted them, saying 'it is Christ on whom your fathers laid hands. He abolished their Sabbath and fasts and festivals and circumcision.' (1: I)-2
190AD CLEMENT OF ALEXANDRIA: (in commenting on each of the Ten Commandments and their Christian meaning:) The seventh day is proclaimed a day of rest, preparing by abstention from evil for the Primal day, our true rest. (Ibid. VII. xvi. 138.1)
190AD CLEMENT OF ALEXANDRIA: He does the commandment according to the Gospel and keeps the Lord’s day, whenever he puts away an evil mind . . . glorifying the Lord’s resurrection in himself. (Ibid. Vii.xii.76.4)
190AD CLEMENT OF ALEXANDRIA: Plato prophetically speaks of the Lord’s day in the tenth book of the Republic, in these words: 'And when seven days have passed to each of them in the meadow, on the eighth they must go on." (Miscellanies V.xiv.106.2)
200AD BARDESANES: Wherever we are, we are all called after the one name of Christ Christians. On one day, the first of the week, we assemble ourselves together (On Fate)
200AD TERTULLIAN: "We solemnize the day after Saturday in contradistinction to those who call this day their Sabbath" (Tertullian's Apology, Ch 16)
200AD TERTULLIAN: It follows, accordingly, that, in so far as the abolition of carnal circumcision and of the old law is demonstrated as having been consummated at its specific times, so also the observance of the Sabbath is demonstrated to have been temporary. (An Answer to the Jews 4:1, Ante-Nicene Fathers Vol. 3, page 155)
200AD TERTULLIAN: Let him who contends that the Sabbath is still to be observed a balm of salvation, and circumcision on the eighth day because of threat of death, teach us that in earliest times righteous men kept Sabbath or practiced circumcision, and so were made friends of God. .. …Therefore, since God originated Adam uncircumcised, and inobservant of the Sabbath, consequently his offspring also, Abel, offering Him sacrifices, uncircumcised and inobservant of the Sabbath, was by Him commended… Noah also, uncircumcised - yes, and inobservant of the Sabbath - God freed from the deluge. For Enoch, too, most righteous man, uncircumcised and inobservant of the Sabbath, He translated from this world… Melchizedek also, "the priest of most high God," uncircumcised and inobservant of the Sabbath, was chosen to the priesthood of God. (An Answer to the Jews 2:10; 4:1, Ante-Nicene Fathers Vol. 3, page 153)
200AD TERTULLIAN: Others . . . suppose that the sun is the god of the Christians, because it is well-known that we regard Sunday as a day of joy. (To the Nations 1: 133)
200AD TERTULLIAN: To us Sabbaths are foreign. (On Idolatry, 14:6)
220AD ORIGEN "On Sunday none of the actions of the world should be done. If then, you abstain from all the works of this world and keep yourselves free for spiritual things, go to church, listen to the readings and divine homilies, meditate on heavenly things. (Homil. 23 in Numeros 4, PG 12:749)
220 AD Origen "Hence it is not possible that the [day of] rest after the Sabbath should have come into existence from the seventh [day] of our God. On the contrary, it is our Savior who, after the pattern of his own rest, caused us to be made in the likeness of his death, and hence also of his resurrection" (Commentary on John 2:28).
225 AD The Didascalia "The apostles further appointed: On the first day of the week let there be service, and the reading of the Holy Scriptures, and the oblation, because on the first day of the week our Lord rose from the place of the dead, and on the first day of the week he arose upon the world, and on the first day of the week he ascended up to heaven, and on the first day of the week he will appear at last with the angels of heaven" (Didascalia 2).
250AD CYPRIAN: The eight day, that is, the first day after the Sabbath, and the Lord’s Day." (Epistle 58, Sec 4)
250 AD IGNATIUS: "If, therefore, those who were brought up in the ancient order of things have come to the possession of a new hope, no longer observing the Sabbath, but living in the observance of the Lord’s Day, on which also our life has sprung up again by Him and by His death-whom some deny, by which mystery we have obtained faith, and therefore endure, that we may be found the disciples of Jesus Christ, our only Master-how shall we be able to live apart from Him, whose disciples the prophets themselves in the Spirit did wait for Him as their Teacher? And therefore He whom they rightly waited for, being come, raised them from the dead. If, then, those who were conversant with the ancient Scriptures came to newness of hope, expecting the coming of Christ, as the Lord teaches us when He says, "If ye had believed Moses, ye would have believed Me, for he wrote of Me; " and again, "Your father Abraham rejoiced to see My day, and he saw it, and was glad; for before Abraham was, I am; " how shall we be able to live without Him? The prophets were His servants, and foresaw Him by the Spirit, and waited for Him as their Teacher, and expected Him as their Lord and Saviour, saying, "He will come and save us." Let us therefore no longer keep the Sabbath after the Jewish manner, and rejoice in days of idleness; for "he that does not work, let him not eat." For say the [holy] oracles, "In the sweat of thy face shalt thou eat thy bread." But let every one of you keep the Sabbath after a spiritual manner, rejoicing in meditation on the law, not in relaxation of the body, admiring the workmanship of God, and not eating things prepared the day before, nor using lukewarm drinks, and walking within a prescribed space, nor finding delight in dancing and plaudits which have no sense in them. And after the observance of the Sabbath, let every friend of Christ keep the Lord’s Day as a festival, the resurrection-day, the queen and chief of all the days [of the week]. Looking forward to this, the prophet declared, "To the end, for the eighth day," on which our life both sprang up again, and the victory over death was obtained in Christ, whom the children of perdition, the enemies of the Saviour, deny, "whose god is their belly, who mind earthly things," who are "lovers of pleasure, and not lovers of God, having a form of godliness, but denying the power thereof." These make merchandise of Christ, corrupting His word, and giving up Jesus to sale: they are corrupters of women, and covetous of other men's possessions, swallowing up wealth insatiably; from whom may ye be delivered by the mercy of God through our Lord Jesus Christ! (Epistle of Ignatius to the Magnesians, Chapter IX)
250AD IGNATIUS: "On the day of the preparation, then, at the third hour, He received the sentence from Pilate, the Father permitting that to happen; at the sixth hour He was crucified; at the ninth hour He gave up the ghost; and before sunset He was buried. During the Sabbath He continued under the earth in the tomb in which Joseph of Arimathaea had laid Him. At the dawning of the Lord’s day He arose from the dead, according to what was spoken by Himself, "As Jonah was three days and three nights in the whale's belly, so shall the Son of man also be three days and three nights in the heart of the earth." The day of the preparation, then, comprises the passion; the Sabbath embraces the burial; the Lord’s Day contains the resurrection." (The Epistle of Ignatius to the Trallians, chapter 9)
250AD IGNATIUS: If any one fasts on the Lord’s Day or on the Sabbath, except on the paschal Sabbath only, he is a murderer of Christ. (The Epistle of Ignatius to the Philippians, chapter 8)
250AD IGNATIUS: "This [custom], of not bending the knee upon Sunday, is a symbol of the resurrection, through which we have been set free, by the grace of Christ, from sins, and from death, which has been put to death under Him. Now this custom took its rise from apostolic times, as the blessed Irenaeus, the martyr and bishop of Lyons, declares in his treatise On Easter, in which he makes mention of Pentecost also; upon which [feast] we do not bend the knee, because it is of equal significance with the Lord’s day, for the reason already alleged concerning it." (Ignatius, Fragments)
300 AD Victorinus "The sixth day [Friday] is called parasceve, that is to say, the preparation of the kingdom. . . . On this day also, on account of the passion of the Lord Jesus Christ, we make either a station to God or a fast. On the seventh day he rested from all his works, and blessed it, and sanctified it. On the former day we are accustomed to fast rigorously, that on the Lord’s day we may go forth to our bread with giving of thanks. And let the parasceve become a rigorous fast, lest we should appear to observe any Sabbath with the Jews . . . which Sabbath he [Christ] in his body abolished" (The Creation of the World).
300AD EUSEBIUS: "They did not, therefore, regard circumcision, nor observe the Sabbath neither do we; … because such things as these do not belong to Christians" (Ecc. Hist., Book 1, Ch. 4)
300AD EUSEBIUS: [The Ebionites] were accustomed to observe the Sabbath and other Jewish customs but on the Lord’s days to celebrate the same practices as we in remembrance of the resurrection of the Savior. (Church History Ill.xxvii.5)
300 AD Eusebius of Caesarea "They [the pre- Mosaic saints of the Old Testament] did not care about circumcision of the body, neither do we [Christians]. They did not care about observing Sabbaths, nor do we. They did not avoid certain kinds of food, neither did they regard the other distinctions which Moses first delivered to their posterity to be observed as symbols; nor do Christians of the present day do such things" (Church History 1:4:8).
300 AD Eusebius of Caesarea "The day of his [Christ's] light . . . was the day of his resurrection from the dead, which they say, as being the one and only truly holy day and the Lord’s day, is better than any number of days as we ordinarily understand them, and better than the days set apart by the Mosaic Law for feasts, new moons, and Sabbaths, which the Apostle [Paul] teaches are the shadow of days and not days in reality" (Proof of the Gospel 4:16:186).
345 AD Athanasius "The Sabbath was the end of the first creation, the Lord’s day was the beginning of the second, in which he renewed and restored the old in the same way as he prescribed that they should formerly observe the Sabbath as a memorial of the end of the first things, so we honor the Lord’s day as being the memorial of the new creation" (On Sabbath and Circumcision 3).
350 AD APOSTOLIC CONSTITUTIONS: Be not careless of yourselves, neither deprive your Saviour of His own members, neither divide His body nor disperse His members, neither prefer the occasions of this life to the word of God; but assemble yourselves together every day, morning and evening, singing psalms and praying in the Lord’s house: in the morning saying the sixty-second Psalm, and in the evening the hundred and fortieth, but principally on the Sabbath-day. And on the day of our Lord’s resurrection, which is the Lord’s day, meet more diligently, sending praise to God that made the universe by Jesus, and sent Him to us, and condescended to let Him suffer, and raised Him from the dead. Otherwise what apology will he make to God who does not assemble on that day to hear the saving word concerning the resurrection, on which we pray thrice standing in memory of Him who arose in three days, in which is performed the reading of the prophets, the preaching of the Gospel, the oblation of the sacrifice, the gift of the holy food? (Constitutions of the Holy Apostles, book 2)
350 AD APOSTOLIC CONSTITUTIONS: For if the Gentiles every day, when they arise from sleep, run to their idols to worship them, and before all their work and all their labors do first of all pray to them, and in their feasts and in their solemnities do not keep away, but attend upon them; and not only those upon the place, but those living far distant do the same; and in their public shows all come together, as into a synagogue: in the same manner those which are vainly called Jews, when they have worked six days, on the seventh day rest, and come together in their synagogue, never leaving or neglecting either rest from labor or assembling together… If, therefore, those who are not saved frequently assemble together for such purposes as do not profit them, what apology wilt thou make to the Lord God who forsakes his Church, not imitating so much as the heathen, but by such, thy absence grows slothful, or turns apostate. or acts wickedness? To whom the Lord says to Jeremiah, "Ye have not kept My ordinances; nay, you have not walked according to the ordinance of the heathen and you have in a manner exceeded them… How, therefore, will any one make his apology who has despised or absented himself from the church of God? (Constitutions of the Holy Apostles, book 2)
350 AD APOSTOLIC CONSTITUTIONS: Do you therefore fast, and ask your petitions of God. We enjoin you to fast every fourth day of the week, and every day of the preparation, and the surplusage of your fast bestow upon the needy; every Sabbath-day excepting one, and every Lord’s day, hold your solemn assemblies, and rejoice: for he will be guilty of sin who fasts on the Lord’s day, being the day of the resurrection, or during the time of Pentecost, or, in general, who is sad on a festival day to the Lord For on them we ought to rejoice, and not to mourn. (Constitutions of the Holy Apostles, book 5)
350 AD APOSTOLIC CONSTITUTIONS "Which Days of the Week We are to Fast, and Which Not, and for What Reasons: But let not your fasts be with the hypocrites; for they fast on the second and fifth days of the week. But do you either fast the entire five days, or on the fourth day of the week, and on the day of the Preparation, because on the fourth day the condemnation went out against the Lord, Judas then promising to betray Him for money; and you must fast on the day of the Preparation, because on that day the Lord suffered the death of the cross under Pontius Pilate. But keep the Sabbath, and the Lord’s day festival; because the former is the memorial of the creation, and the latter of the resurrection. But there is one only Sabbath to be observed by you in the whole year, which is that of our Lord’s burial, on which men ought to keep a fast, but not a festival. For inasmuch as the Creator was then under the earth, the sorrow for Him is more forcible than the joy for the creation; for the Creator is more honourable by nature and dignity than His own creatures." (Constitutions of the Holy Apostles, book 7)
350 AD APOSTOLIC CONSTITUTIONS "How We Ought to Assemble Together, and to Celebrate the Festival Day of Our Saviour's Resurrection. On the day of the resurrection of the Lord, that is, the Lord’s day, assemble yourselves together, without fail, giving thanks to God, and praising Him for those mercies God has bestowed upon you through Christ, and has delivered you from ignorance, error, and bondage, that your sacrifice may be unspotted, and acceptable to God, who has said concerning His universal Church: "In every place shall incense and a pure sacrifice be offered unto me; for I am a great King, saith the Lord Almighty, and my name is wonderful among the heathen." (Constitutions of the Holy Apostles, book 7)
350 AD Cyril of Jerusalem "Fall not away either into the sect of the Samaritans or into Judaism, for Jesus Christ has henceforth ransomed you. Stand aloof from all observance of Sabbaths and from calling any indifferent meats common or unclean" (Catechetical Lectures 4:37).
360 AD Council of Laodicea "Christians should not Judaize and should not be idle on the Sabbath, but should work on that day; they should, however, particularly reverence the Lord’s day and, if possible, not work on it, because they were Christians" (canon 29).
387 AD John Chrysostom "You have put on Christ, you have become a member of the Lord and been enrolled in the heavenly city, and you still grovel in the Law [of Moses]? How is it possible for you to obtain the kingdom? Listen to Paul's words, that the observance of the Law overthrows the gospel, and learn, if you will, how this comes to pass, and tremble, and shun this pitfall. Why do you keep the Sabbath and fast with the Jews?" (Homilies on Galatians 2:17).
387 AD John Chrysostom "The rite of circumcision was venerable in the Jews' account, forasmuch as the Law itself gave way thereto, and the Sabbath was less esteemed than circumcision. For that circumcision might be performed, the Sabbath was broken; but that the Sabbath might be kept, circumcision was never broken; and mark, I pray, the dispensation of God. This is found to be even more solemn that the Sabbath, as not being omitted at certain times. When then it is done away, much more is the Sabbath" (Homilies on Philippians 10).
412 AD Augustine "Well, now, I should like to be told what there is in these Ten Commandments, except the observance of the Sabbath, which ought not to be kept by a Christian . . . Which of these commandments would anyone say that the Christian ought not to keep? It is possible to contend that it is not the Law which was written on those two tables that the apostle [Paul] describes as 'the letter that kills' [2 Cor. 3:6], but the law of circumcision and the other sacred rites which are now abolished" (The Spirit and the Letter 24).
597 AD Gregory I "It has come to my ears that certain men of perverse spirit have sown among you some things that are wrong and opposed to the holy faith, so as to forbid any work being done on the Sabbath day. What else can I call these [men] but preachers of Antichrist, who when he comes will cause the Sabbath day as well as the Lord’s day to be kept free from all work. For because he [the Antichrist] pretends to die and rise again, he wishes the Lord’s day to be had in reverence; and because he compels the people to Judaize that he may bring back the outward rite of the Law, and subject the perfidy of the Jews to himself, he wishes the Sabbath to be observed. For this which is said by the prophet, 'You shall bring in no burden through your gates on the Sabbath day' (Jer. 17:24) could be held to as long as it was lawful for the Law to be observed according to the letter. But after that the grace of almighty God, our Lord Jesus Christ, has appeared, the commandments of the Law which were spoken figuratively cannot be kept according to the letter. For if anyone says that this about the Sabbath is to be kept, he must needs say that carnal sacrifices are to be offered. He must say too that the commandment about the circumcision of the body is still to be retained. But let him hear the apostle Paul saying in opposition to him: 'If you be circumcised, Christ will profit you nothing' (Gal. 5:2)" (Letters 13:1).
Eli Bendersky 23/08/2023 | Source: Eli Bendersky's website
Many years ago I re-posted a Stack Overflow answer with Python code for a terse prime sieve function that generates a potentially infinite sequence of prime numbers ("potentially" because it will run out of memory eventually). Since then, I've used this code many times - mostly because it's short and clear. In this post I will explain how this code works, where it comes from (I didn't come up with it), and some potential optimizations. If you want a teaser, here it is:
def gen_primes():
    """Generate an infinite sequence of prime numbers."""
    D = {}
    q = 2
    while True:
        if q not in D:
            D[q * q] = [q]
            yield q
        else:
            for p in D[q]:
                D.setdefault(p + q, []).append(p)
            del D[q]
        q += 1
To understand what this code does, we should first start with the basic Sieve of Eratosthenes; if you're familiar with it, feel free to skip this section.
The Sieve of Eratosthenes is a well-known algorithm from ancient Greek times for finding all the primes below a certain number reasonably efficiently using a tabular representation. The animation in the Wikipedia article illustrates the process nicely.
Starting with the first prime (2) it marks all its multiples until the requested limit. It then takes the next unmarked number, assumes it's a prime (because it is not a multiple of a smaller prime), and marks its multiples, and so on until all the multiples below the limit are marked. The remaining unmarked numbers are primes.
Here's a well-commented, basic Python implementation:
import math

def gen_primes_upto(n):
    """Generates a sequence of primes < n.

    Uses the full sieve of Eratosthenes with O(n) memory.
    """
    if n == 2:
        return

    # Initialize table; True means "prime", initially assuming all numbers
    # are prime.
    table = [True] * n
    sqrtn = int(math.ceil(math.sqrt(n)))

    # Starting with 2, for each True (prime) number I in the table, mark all
    # its multiples as composite (starting with I*I, since earlier multiples
    # should have already been marked as multiples of smaller primes).
    # At the end of this process, the remaining True items in the table are
    # primes, and the False items are composites.
    for i in range(2, sqrtn):
        if table[i]:
            for j in range(i * i, n, i):
                table[j] = False

    # Yield all the primes in the table.
    yield 2
    for i in range(3, n, 2):
        if table[i]:
            yield i
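To try it out, here's a quick usage sketch (assuming gen_primes_upto is defined as above):

# List all the primes below 30.
print(list(gen_primes_upto(30)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]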
When we want a list of all the primes below some known limit, gen_primes_upto is great, and performs fairly well. There are two issues with it, though: we have to know the limit ahead of time, which isn't always possible or convenient, and its memory usage is O(n), which can become prohibitive for large limits.
Back to the infinite prime generator that's the focus of this post. Here is its code again, now with some comments:
def gen_primes():
    """Generate an infinite sequence of prime numbers."""
    # Maps composites to primes witnessing their compositeness.
    D = {}

    # The running integer that's checked for primeness
    q = 2

    while True:
        if q not in D:
            # q is a new prime.
            # Yield it and mark its first multiple that isn't
            # already marked in previous iterations
            D[q * q] = [q]
            yield q
        else:
            # q is composite. D[q] holds some of the primes that
            # divide it. Since we've reached q, we no longer
            # need it in the map, but we'll mark the next
            # multiples of its witnesses to prepare for larger
            # numbers
            for p in D[q]:
                D.setdefault(p + q, []).append(p)
            del D[q]
        q += 1
The key to the algorithm is the map D. It holds all the primes encountered so far, but not as keys! Rather, they are stored as values, with the keys being the next composite number they divide. This lets the program avoid having to divide each number it encounters by all the primes known so far - it can simply look in the map. A number that's not in the map is a new prime, and the way the map updates is not unlike the sieve of Eratosthenes - when a composite is removed, we add the next composite multiple of the same prime(s). This is guaranteed to cover all the composite numbers, while prime numbers should never be keys in D.
I highly recommend instrumenting this function with some printouts and running through a sample invocation - it makes it easy to understand how the algorithm makes progress.
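For example, since the generator is infinite, a minimal way to exercise it is to slice off a bounded prefix with itertools.islice:

import itertools

# Take the first ten primes from the infinite generator.
print(list(itertools.islice(gen_primes(), 10)))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]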
Compared to the full sieve gen_primes_upto, this function doesn't require us to know the limit ahead of time - it will keep producing prime numbers ad infinitum (but will run out of memory eventually). As for memory usage, the D map has all the primes in it somewhere, but each one appears only once. So its size is O(π(n)), where π(n) is the prime-counting function, the number of primes smaller than or equal to n. This can be approximated by O(n/ln(n)) [1].
I don't remember where I first saw this approach mentioned, but all the breadcrumbs lead to this ActiveState Recipe by David Eppstein from way back in 2002.
I really like gen_primes; it's short, easy to understand and gives me as many primes as I need without forcing me to know what limit to use, and its memory usage is much more reasonable than the full-blown sieve of Eratosthenes. It is, however, also quite slow, over 5x slower than gen_primes_upto.
The aforementioned ActiveState Recipe thread has several optimization ideas; here's a version that incorporates ideas from Alex Martelli, Tim Hochberg and Wolfgang Beneicke:
import itertools

def gen_primes_opt():
    yield 2
    D = {}
    for q in itertools.count(3, step=2):
        p = D.pop(q, None)
        if not p:
            # q is prime; mark its square (the first relevant multiple).
            D[q * q] = q
            yield q
        else:
            # q is composite; advance its witness p to the next odd
            # multiple that isn't already claimed.
            x = q + p + p  # get odd multiples
            while x in D:
                x += p + p
            D[x] = p
The optimizations are:

Even numbers are skipped entirely: 2 is yielded up front, the loop counts through odd numbers only (itertools.count(3, step=2)), and witnesses advance in steps of 2*p so they only land on odd multiples.

Each composite maps to a single witness prime rather than a list; when the next multiple of a witness is already taken, a short while loop probes forward until a free slot is found.

D.pop combines the membership test and the deletion into a single dictionary operation.
With these in place, the function is more than 3x faster than before, and is now within 40% or so of gen_primes_upto, while remaining short and reasonably clear.
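If you want to reproduce this kind of comparison yourself, here's a minimal timing sketch (my addition; the prime count and repeat count are arbitrary, and absolute numbers will vary by machine):

import itertools
import timeit

def first_n_primes(gen_fn, n):
    # Drain the first n primes from a generator function.
    return list(itertools.islice(gen_fn(), n))

for gen_fn in (gen_primes, gen_primes_opt):
    t = timeit.timeit(lambda: first_n_primes(gen_fn, 10_000), number=10)
    print(f"{gen_fn.__name__}: {t:.2f}s for 10 runs")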
There are even fancier algorithms that use interesting mathematical tricks to do less work. Here's an approach by Will Ness and Tim Peters (yes, that Tim Peters) that's reportedly faster. It uses the wheels idea from this paper by Sorenson. Some additional details on this approach are available here. This algorithm is both faster and consumes less memory; on the other hand, it's no longer short and simple.
To be honest, it always feels a bit odd to me to painfully optimize Python code, when switching languages provides vastly bigger benefits. For example, I threw together the same algorithms using Go and its experimental iterator support; it's 3x faster than the Python version, with very little effort (even though the new Go iterators and yield functions are still in the proposal stage and aren't optimized). I can't try to rewrite it in C++ or Rust for now, due to the lack of generator support; the yield statement is what makes this code so nice and elegant, and alternative idioms are much less convenient.
The Wikipedia article on the sieve of Eratosthenes mentions a segmented approach, which is also described in the Sorenson paper in section 5.
The main insight is that we only need the primes up to √N to be able to sieve a table all the way to N. This results in a sieve that uses only O(√N) memory. Here's a commented Python implementation:
def gen_primes_upto_segmented(n):
    """Generates a sequence of primes < n.

    Uses the segmented sieve of Eratosthenes algorithm with O(√n) memory.
    """
    # Simplify boundary cases by hard-coding some small primes.
    if n < 11:
        for p in [2, 3, 5, 7]:
            if p < n:
                yield p
        return

    # We break the range [0..n) into segments of size √n.
    segsize = int(math.ceil(math.sqrt(n)))

    # Find the primes in the first segment by calling the basic sieve on that
    # segment (its memory usage will be O(√n)). We'll use these primes to
    # sieve all subsequent segments.
    baseprimes = list(gen_primes_upto(segsize))
    for bp in baseprimes:
        yield bp

    for segstart in range(segsize, n, segsize):
        # Create a new table of size √n for each segment; the old table
        # is thrown away, so the total memory use here is O(√n).
        # seg[i] represents the number segstart+i.
        seg = [True] * segsize
        for bp in baseprimes:
            # The first multiple of bp in this segment can be calculated
            # using modulo.
            first_multiple = (
                segstart if segstart % bp == 0 else segstart + bp - segstart % bp
            )
            # Mark all multiples of bp in the segment as composite.
            for q in range(first_multiple, segstart + segsize, bp):
                seg[q % len(seg)] = False

        # Sieving is done; yield all the primes in the segment (iterating
        # only over the odd ones).
        start = 1 if segstart % 2 == 0 else 0
        for i in range(start, len(seg), 2):
            if seg[i]:
                if segstart + i >= n:
                    break
                yield segstart + i
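A quick way to gain confidence in the segmented version is to check it against the basic sieve (a test sketch I've added):

# The two sieves should agree for every limit in a modest range.
for n in range(2, 500):
    assert list(gen_primes_upto_segmented(n)) == list(gen_primes_upto(n)), n
print("OK")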
The full code for this post - along with tests and benchmarks - is available on GitHub.
[1] While this is a strong improvement over O(n) (e.g. when generating primes up to a billion, memory usage here is only about 5% of the full sieve version), it still depends on the size of the input. In the unlikely event that you need to generate truly gigantic primes starting from 2, even the square-root-space solutions become infeasible. In this case, the whole approach should be changed; instead, one would just generate random huge numbers and use probabilistic primality testing to check for their primeness. This is what real libraries like Go's crypto/rand.Prime do.
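To illustrate that last point, here's a minimal sketch of the random-candidates-plus-probabilistic-testing approach in Python (my addition, not from the original post; real code should use a vetted library):

import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    # Quick trial division by a few small primes.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n-1 as d * 2^r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def random_probable_prime(bits):
    """Sample random odd numbers of the given bit length (bits >= 2)
    until one passes the test."""
    while True:
        # Force the top bit (exact bit length) and the bottom bit (oddness).
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

print(random_probable_prime(256))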
Luke Plant 22/08/2023 | Source: Luke Plant's home page
The reason that modern web development is swamped with complexity is that no one really wants things to be simple. We just think we do, while our choices prove otherwise.
A lot of developers want simplicity in the same way that a lot of clients claim they want a fast website. You respond “OK, so we can remove some of these 17 Javascript trackers and other bloat that’s making your website horribly slow?” – no, apparently those are all critical business functionality.
In other words, they prioritise everything over speed. And then they wonder why using their website is like rowing a boat through a lake of molasses on a cold day using nothing but a small plastic spoon.
The same is often true of complexity. The real test is the question “what are you willing to sacrifice to achieve simplicity?” If the answer is “nothing”, then you don’t actually love simplicity at all, it’s your lowest priority.
When I say “sacrifice”, I don’t mean that choosing simplicity will mean you are worse off overall – simplicity brings massive benefits. But it does mean that there will be some things that tempt you to believe you are missing out.
For every developer, it might be something different. For one, the tedium of having to spend half an hour a month ensuring that two different things are kept in sync easily justifies the adoption of a bulky framework that solves that particular problem. For another, the ability to control how a checkbox animates when you check it is of course a valid reason to add another 50 packages and 3 layers of frameworks to their product. For another, adding an abstraction with thousands of lines of code, dozens of classes and page after page of documentation in order to avoid manually writing a tiny factory function for a test is a great trade-off.
Of course we all claim to hate complexity, but it’s actually just complexity added by other people that we hate — our own bugbears are always exempted, and for things we understand we quickly become unable to even see there is a potential problem for other people. Certainly there are frameworks and dependencies that justify their existence and adoption, but working out which ones they are is hard.
I think a good test of whether you truly love simplicity is whether you are able to remove things you have added, especially code you’ve written, even when it is still providing value, because you realise it is not providing enough value.
Another test is what you are tempted to do when a problem arises with some of the complexity you’ve added. Is your first instinct to add even more stuff to fix it, or is it to remove and live with the loss?
The only path I can see through all this is to cultivate an almost obsessive suspicion of FOMO. I think that’s probably key to learning to say no.