Noordstar Blog

Homepage and blog by Bram Noordstar

I write here because I like writing. Simple as that. Some of it is technical, some of it is just thoughts and little ideas. No tracking, no engagement farming or ads – just words on a page.

Things you might be looking for

  • Blog posts – Browse the latest or check out posts by topic
  • Code & Projects – My GitHub or Git server
  • Self-hosted services – If you're a friend or family member looking for something, you probably already know where to go. If not, ask me directly.
  • Contact – Reach out to me on Matrix

Who I am

I like open source, decentralized tech, and figuring things out for myself. I love D&D, public transport and Europe. I would've liked to use the term cyberpunk to describe my blog, had it not already been used to describe a dystopian hyper-capitalist setting.

Hashtags

Use any of these hashtags to find posts about that topic!

#bayesian #doomsdayargument #foss #functional #korea #languagedesign #maths #philosophy #politics #publictransport #regulation #segmentdisplay #traffic

I have created a 45-bit segment display for Hangul, the Korean script. See the live demo here.

Hangul segment display displaying the word 떡볶이

What is Hangul?

Recently, I have started to learn a bit of Korean. One of the most fascinating parts of the language is its script called 한글, or Hangul in its romanized form.

This script, like the Latin alphabet, lets you pronounce each word by reading the letters left to right. In Korean, however, the letters are grouped: each “block” represents one syllable and contains all the jamo (symbols) used in that syllable.

For example, ㅂ represents a p or b, ㅏ represents an a, and ㄴ represents an n. So a character like 바 creates the syllable ba, and the character 나 creates the syllable na. So 바나나 represents ba-na-na, which is actually the word for banana in Korean!

As Professor Emeritus of Korean language and linguistics Jaehoon Yeon writes in the book Beginners' Korean (ISBN 978-1-399-82161-2), “Hangeul is one of the world's most scientific writing systems and has received worldwide acclaim from countless linguists. As a unique systematized phonetic script, Hangeul can express up to 10,000 sounds. It is perhaps the most outstanding scientific and cultural achievement of the Korean nation.” While I think calling the syllables “sounds” is a bit of a stretch, it is amazing how such a systematized script can display so many syllables as unique characters.

To practice and to get to know them, I have tried to learn the script in my own way!

What are segment displays?

A segment display uses segments instead of pixels to create text on screen. The most common example is the seven-segment display used to show numbers.

However, as the Korean Wikipedia page on segment displays reveals, such segment displays aren't very good at displaying Korean characters. For example, can you decipher what is written in this image?

Seven-segment display showing the same word as the picture at the start of this article

I've recently learned to read Korean, and decided to design a segment display that would properly show Hangul in an unambiguous and readable format. In my opinion, the result looks readable to a degree where it could pass as a peculiar font!

The challenge with segment displays is to use as few segments as possible while keeping the font readable. In my case, I have created a 45-bit design, which means that the design uses 45 segments to create the font. In other words, all Hangul syllable blocks can be displayed on my design by activating a unique group of segments.

How the design was made

Creating the design consisted of three parts:

  1. Designing the individual segments
  2. Designing the font
  3. Building the demo

Designing the individual segments

Most of my design is based on this public domain Hangul romanization guide. It contains a comprehensive explanation of all possible characters in the Hangul script:

Detailed guide in the public domain explaining Hangul

As you can see, every syllable block consists of 2 or 3 jamo, and they are rendered independently of one another – only the placement can change. This means we can design the three parts (initial, medial, final) independently and then arrange them in an appropriate fashion.

Initial part

To make the creation of a segment display easier, I've taken the liberty of dividing the consonants into two groups:

  • The blocky jamo ㄱ, ㅋ, ㄴ, ㄷ, ㅌ, ㄹ, ㅁ, ㅂ, and ㅍ.
  • The curvy jamo ㅅ, ㅇ, ㅈ, ㅊ and ㅎ.

The blocky jamo can be rendered by lighting the edges of a square: ㄱ highlights the top and the right edge, ㄴ highlights the left and the bottom edge, and ㅁ highlights all four edges. To add support for all blocky jamo, the square can have a bar in the middle for the jamo ㅋ and ㄹ, and the display can have some dots near the corners for the jamo ㅂ and ㅍ.

Blocky jamo segment display design
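
To make this mapping concrete, here is a minimal sketch of how it could be expressed in code. The segment names, bit positions, and the idea of storing the display state as a bitmask are my own illustrative assumptions; only the three jamo-to-edge mappings come from the description above.

# Illustrative only: four hypothetical segments forming the consonant square.
TOP, RIGHT, BOTTOM, LEFT = 1 << 0, 1 << 1, 1 << 2, 1 << 3

# Blocky jamo light up edges of the square, as described above.
BLOCKY = {
    "ㄱ": TOP | RIGHT,
    "ㄴ": LEFT | BOTTOM,
    "ㅁ": TOP | RIGHT | BOTTOM | LEFT,
}

# Rendering a jamo is then just OR-ing its mask into the display state.
display_state = 0
display_state |= BLOCKY["ㄱ"]
print(bin(display_state))  # 0b11 -> the top and right segments are lit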

The curvy jamo are a bit more complex. By adding the ㅅ inside the square shape, one can automatically create ㅅ, ㅈ and ㅊ – but the circles in ㅇ and ㅎ form a problem. By compromising and making the circle a triangle, however, one can render ㅇ, and ㅎ with the addition of a dot at the middle of the top edge.

Complete segment display design for all consonants

Some initial jamo are doubled – but only ㄱ, ㄷ, ㅂ, ㅅ, and ㅈ appear in a doubled form. By creating a second (but simplified) version of our initial segment display, we can create all possible initial characters.

Simplified segment display design for double consonants

Medial part

The medial part is always the only vowel in every syllable block. The available vowels are the following:

  • The horizontal jamo ㅗ, ㅛ, ㅜ, ㅠ, and ㅡ.
  • The vertical jamo ㅏ, ㅑ, ㅓ, ㅕ, ㅣ, ㅐ, ㅒ, ㅔ, and ㅖ.
  • The combined jamo ㅘ, ㅝ, ㅙ, ㅞ, ㅚ, ㅟ, and ㅢ.

When combined with a consonant jamo such as ㅇ, horizontal jamo render like 으, vertical jamo render like 이, and combined jamo render like 의. As you can see, every jamo's direction, shape and placement is determined by the long lines. Given that most combinations can exist, we can build two horizontal lines, two vertical lines, and then place some dots on most sides. One detail to keep in mind is that the dots on the vertical bar in ㅝ and ㅞ are supposed to go below the horizontal bar. This brings us to the following design:

Complete segment display design for all vowels

Final part

All consonant jamo appear in the initial part – so the design for the initial part is sufficient to render any jamo. However, the final part can consist of two jamo! More specifically, when rendering double jamo:

  • The first jamo can be ㄱ, ㄴ, ㄹ, and ㅂ.
  • The second jamo can be ㄱ, ㅅ, ㅈ, ㅎ, ㅂ, ㅌ, ㅁ and ㅍ.

So the final part needs two jamo segment displays: one should be able to render all consonant jamo, and the other only a select group. The arrangement depends on how the syllable block is formed.

Designing the font

With the knowledge of how to render the initial, medial, and final parts of a syllable block, we can now design the font. The trick is to arrange the segments in a way that any rendered syllable block remains readable.

For the website's design, I have relied on a font from some ancient Korean sources, which would sometimes render the final part a bit to the side of the initial and medial parts. This allows the syllable block to be a bit oddly shaped while still retaining consistency.

As such, whenever the final part consists of only one jamo, it is rendered in the second position, so the first position only ever needs to render ㄱ, ㄴ, ㄹ, ㅂ, and ㅅ.

Simplified segment display design for double consonant final parts

Building the demo

While the design of the segment display is complete with this design, building a demo also requires the system to translate Korean text to a series of bits that activates the segments appropriately. For this, the blog post How Korean input methods work by m10k has proven extremely useful. This helped me write a Hangul parser that identifies the individual jamo in order to render them on the webpage.
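
To give an idea of what that parsing step involves, here is a minimal Python sketch (not the code used for the demo) that splits precomposed Hangul syllables into their jamo using the standard Unicode arithmetic: every syllable block in the range U+AC00..U+D7A3 encodes an (initial, medial, final) triple.

INITIALS = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")
MEDIALS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")
FINALS = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable):
    """Split one precomposed Hangul syllable into (initial, medial, final) jamo."""
    index = ord(syllable) - 0xAC00
    initial, rest = divmod(index, 21 * 28)
    medial, final = divmod(rest, 28)
    return INITIALS[initial], MEDIALS[medial], FINALS[final]

print([decompose(s) for s in "떡볶이"])
# [('ㄸ', 'ㅓ', 'ㄱ'), ('ㅂ', 'ㅗ', 'ㄲ'), ('ㅇ', 'ㅣ', '')]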

Using Elm, I then turned the parser into a webpage that renders any inserted Korean text on the segment display. The website dynamically decodes the user input and renders the text on screen.

Despite the visual quirks—like some jamo (e.g., ㅅ, ㅇ) not rendering perfectly, or syllables like 이 appearing disproportionately small next to something like 쀲—the display manages to stay surprisingly readable. These trade-offs are part of keeping the design minimalistic. With more bits (segments), one could definitely improve clarity and consistency—especially in character size. I'd like to design a more advanced segment display based on the segment displays on the Berlin U-Bahn and the train stations in Brussels, which balance clarity and space beautifully.

Brussels train station segment display

Berlin U-bahn segment display

Why doesn't this already exist?

Before designing my own segment display, I looked around the internet and couldn't find an existing design! I would love to hear from Koreans whether they ever encounter segment displays in their daily lives, or whether they're rather uncommon.

My current theory is that by the time computers were sufficiently advanced to drive 45-bit segment displays, LCD screens were already a thing and there was no longer a need for a segment display. Nevertheless, it surprises me that I cannot find an existing design.

Regardless, this is how my minimalist-but-readable design ended up. Using just 45 bits, this segment display can render every valid Hangul syllable block. It started as a way to learn the script more deeply, but it turned into a functional segment display that I hope will inspire some Korean tinkerers.

You can try the live demo here. Feel free to mess with the design and make it your own. I won't be extending this design to Hanja (good luck cramming that into 45 bits!) but I'd love to keep in touch with anyone who builds onto this design.

Have fun.

#korea #segmentdisplay



The Carter-Leslie Doomsday Argument is a probabilistic argument that claims the expected lifetime of humanity is proportional to the number of humans that have existed so far. More specifically, the chance of being part of the first 1% of humans is only 1% – so there might be a 99% chance of humans going extinct before reaching 100 times the number of humans that have existed so far.

This article contains a simulation that provides a counter-argument to the Doomsday Argument by arguing that one's position in the list of existing humans is independent of the total number of people. I will soon write a second article in which I go a bit deeper into the philosophical theory that constructs this bizarre problem.

Bayesian lottery

The Doomsday Argument was first introduced to me by a similar question: suppose you're participating in a lottery. You don't know how many others participate, but you are told that there's a 50% chance that 10 people participate, and there's a 50% chance that 1 trillion people participate. If you draw lottery number 5, what does that tell you about how many people participate?

Both one's intuition and Bayesian statistics tell you that the likelihood heavily favours the option where only 10 people participate, as drawing a ticket below 11 is a vanishingly small chance in a trillion-ticket lottery. With the Doomsday Argument, however, this logic can draw an emotional reaction: people don't want it to make sense, as the conclusion sounds rather chilling. Given the recent population boom, it suggests that the apocalypse may be only a few generations away.

At first, I wasn't too interested in the problem: I dismissed it as 'Bayesian trickery' – my own term for abusing Bayes' theorem to support wild claims. For example, people have used it to argue that we live in a simulation, that we must build a murderous AI superintelligence to avoid extinction, and that God exists because miracles occur. Such arguments have left me with a certain level of skepticism toward outlandish claims relying on Bayesian statistics. My counter-argument might be flawed or utterly nonsensical, but my gut feeling remains that the Doomsday Argument is a well-crafted example of Bayesian trickery that has remained untamed for several decades now.

Simulations

Personally, I am a fan of using numeric simulations to make converging approximations of probabilities, as they help gain insight into how chances work.

For example, to simulate the lottery problem, the following script can offer some insight:

import random

N = 1_000_000
a, b = 0, 0

# Sample N times
for _ in range(N):
    if random.random() < 0.5:
        # The lottery has 10 participants
        a += 1 if random.randint(1, 10) == 5 else 0
    else:
        # The lottery has 1 trillion participants
        b += 1 if random.randint(1, int(1e12)) == 5 else 0

# When drawing 5, this number represents the fraction of times the lottery had 10 participants.
# This number should be near 1.
print(a / (a + b))

We can even calculate that this fraction should converge to $\frac{\frac{1}{10}}{\frac{1}{10} + \frac{1}{10^{12}}} \approx 1 - 10^{-11}$.

Nevertheless, with a similar simulation, I will demonstrate why this analogy does not carry over to the Doomsday Argument.

Bug world

Enter Bug World, a hypothetical planet in our galaxy that hosts a wide variety of bug species with peculiar lifetimes. We'll test whether being “early” in humanity means doom is near, and we'll do that by having only some bugs thrive on Bug World. This lets us ask: do early bugs have reason to believe their species will end soon? Let's simulate!

On Bug World, every species is destined to go extinct after 10 bugs or after 1000 bugs. (We could've said 1 trillion, but that's too intense to simulate on contemporary hardware.) All bugs reproduce asexually and immediately die when they give birth, so there's always exactly one bug of every species alive. The 10th bug has a 50 percent chance of dying without producing the 11th bug. If the 11th bug is born, however, the species is guaranteed to go extinct after 1000 bugs have been born.

Since we're going to run this simulation on many species, we'll call a species a mayfly type if it goes extinct after the 10th bug, and we'll call it a millipede if the species survives until its 1000th bug. We'll call a bug young if it is one of the first 10 bugs in its species, and we'll call a millipede old if it is the 11th or later specimen.

I have started a simulation. Every tick or so, a new bug species emerges in the world. At first, you notice a linear growth in the number of bugs, but the Bug World population stabilizes after the first millipedes start to go extinct. From there, the simulation mostly shows a uniform distribution of all the bugs and their potential lifetimes.

Simulation of the bug world population

The graph shows in blue how many bugs are alive at a time step. The green line below shows how many of those bugs are still young. For those bugs, it isn't determined yet whether they are mayflies or millipedes.

Let's sample some bugs! Bug World simulates species that may end early (like humanity might). Sampling bugs is like asking: “Am I, a humble bug, early in my species’ timeline?” If the answer is yes, we'll move forward in time to see when the species goes extinct.

When sampling bugs from Bug World, I got the following results:

We sampled 10000 bugs, of which 208 (~2%) were young bugs.

Old millipede: 9792
Old mayfly: 0
Young species ended up being a millipede: 107
Young species ended up being a mayfly: 101
======================================
Odds of a millipede species dying old: 100%
Odds of selecting a young bug when sampling a millipede: ~1%
Odds of selecting a millipede when sampling a young bug: ~51%

These results reveal something interesting! We sampled 10K bugs, and we can translate the Doomsday Argument to this situation: instead of wondering about our own position in humanity, let's look at the position of all young bugs. When sampling a young bug, the odds of it belonging to a millipede or a mayfly species are about 50/50 – despite the odds of selecting a young millipede being minimal! This means that a young bug cannot conclude it is likely a mayfly merely because it is young.
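
For readers who want to reproduce these numbers without running the full world simulation, here is a minimal sketch. It assumes (my simplification, not part of the original simulation) that sampling from the stabilized Bug World is equivalent to sampling a bug uniformly from all bugs a species ever produces, with species split 50/50 between mayflies and millipedes.

import random

N = 10_000
old_millipede = young_millipede = young_mayfly = 0

for _ in range(N):
    # A species is picked in proportion to how many bugs it produces:
    # mayflies produce 10 bugs in total, millipedes produce 1000.
    kind = random.choices(["mayfly", "millipede"], weights=[10, 1000])[0]
    lifetime = 10 if kind == "mayfly" else 1000
    position = random.randint(1, lifetime)  # which bug of its species we sampled
    if position <= 10:  # young bug
        if kind == "millipede":
            young_millipede += 1
        else:
            young_mayfly += 1
    else:  # old bug: only millipedes ever get past the 10th bug
        old_millipede += 1

print(f"Young bugs: {young_millipede + young_mayfly} (~2% of {N})")
print(f"Odds of a young bug being a millipede: "
      f"{young_millipede / (young_millipede + young_mayfly):.0%}")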

Why is this result different from the simplified lottery example? I believe the flaw in the Doomsday Argument is that it assumes that you, the observer, are guaranteed to exist in every scenario. However, for the bugs on Bug World, 99% of the millipedes wouldn't exist if their species had been a mayfly.

Fire Lottery

Let's reshape the lottery thought experiment in a way that demonstrates how I think the Doomsday Argument should be imagined. For this new lottery, there are 1 trillion people willing to participate. However, there has been a fire at the lottery factory, so not every participant receives a lottery ticket! You do, however, and you receive ticket number 5.

To me, the lottery represents being born, and the ticket represents your number in human history. The unique part about the fire lottery, however, is that you do not know the total number of lottery tickets remaining.

You can try reasoning that it's unlikely that you ended up in the bottom 1% of tickets and therefore there's a 99% chance of at most 500 tickets having been printed, but that clearly doesn't work. It's easy to see that you're overlooking the massive chance that you wouldn't have received a lottery ticket anyway.
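
Here is a minimal sketch of that contrast. The concrete numbers are illustrative assumptions (1,000,000 willing participants instead of 1 trillion, and two equally likely scenarios for how many tickets survived the fire); the point is only that receiving ticket 5 is equally unlikely in both scenarios once you might not have received a ticket at all.

participants = 1_000_000     # people willing to play (illustrative)
scenarios = [10, 1_000]      # possible numbers of surviving tickets, 50/50 prior

# Original lottery: you are guaranteed a ticket, numbered uniformly from 1..K.
p_original = [1 / k for k in scenarios]

# Fire lottery: a random K of the participants receive tickets 1..K, so
# P(you hold ticket 5) = (K / participants) * (1 / K) = 1 / participants.
p_fire = [(k / participants) * (1 / k) for k in scenarios]

def posterior_small(p):
    """Posterior probability of the small-ticket scenario, given a 50/50 prior."""
    return p[0] / (p[0] + p[1])

print(posterior_small(p_original))  # ~0.99: ticket 5 strongly favours few tickets
print(posterior_small(p_fire))      # 0.5: ticket 5 tells you nothing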

I believe that these lottery tickets represent human existence in our world. This explains why the odds don't shift for young bugs in Bug World, and it shows the independence between the total number of humans to exist and the number of humans that have existed so far.

Conclusion

The Doomsday Argument assumes that your position among humans is informative about how many humans there will be. But both Bug World and the Fire Lottery suggest otherwise: if your existence depends on many people existing in the first place, then being “early” says little about the total number. In other words, your ticket number only matters if you were guaranteed a ticket in the first place.

Bug World touches on underlying concepts that I will explore in a future article. I'll post a link here once that article is up, or you can click on any of the hashtags to view all posts on a given topic.

#bayesian #doomsdayargument #maths #philosophy



As can be read in my first blog post, I intend to build an Elm-like language with some unique design choices – and the community has taught me some valuable things!

TLDR: My design seems akin to cutting-edge programming languages. I have a functional proof-of-concept! I'm not content with the syntax yet.

My idea isn't original – but it seems new

One of the most intriguing parallels to selective mutability I've discovered is the “resurrection hypothesis” mentioned in a 2019 paper called Counting Immutable Beans. The resurrection hypothesis describes the observation that many objects die just before the creation of an object of the same kind.

The map function is a great example of this:

type Tree a = Leaf a | Node (Tree a) (Tree a)

map : (a -> b) -> Tree a -> Tree b
map f tree =
    case tree of
        Leaf x ->
            Leaf (f x)

        Node tree1 tree2 ->
            Node (map f tree1) (map f tree2)

If the language is purely immutable, then you might use twice the necessary memory: you would traverse the tree, build a new tree with an identical structure, and then discard the old one. But imagine that the update function of our MUV system looks like this:

type alias Model = { name : String, tree : Tree Int }

update : Int -> Model -> Model
update n model =
    { model | tree = map (\x -> x * 2) model.tree }

Most immutable languages would duplicate the tree and discard the old one, effectively doubling memory usage. But with selective mutability, the tree could be updated in-place.

There are also other programming languages developing similar ideas. The research language Koka uses Perceus, a reference-counting algorithm that avoids a tracing garbage collector. Similarly, Neut does what it calls static memory management, finding malloc/free pairs of matching sizes to optimize around the resurrection hypothesis.

A proof-of-concept works – for now

As an experiment, I have built a proof-of-concept transpiled version. A major challenge with catching memory problems is that most operating systems aren't built for it. From my understanding, C, LLVM and Rust typically rely on the OS to manage the stack and heap. If there's an overflow, or some other problem, the OS terminates the program, reporting a segmentation fault. Not very helpful!

As a result, I have designed my own stack/heap system in a C program. Similar to a VM, the code runs in a single block of memory that's assigned to the program on startup. It functions reliably, regardless of available memory size.

For now, this snippet represents the decompiled version of the file low.c:

main =
    h "Kaokkokos shall prevail by the hands of Alkbaard!"

h : String -> String
h x = f ( g x )

f : String -> String
f = String.upper

g : String -> String
g = Console.print -- currently an identity function with side-effects

This proof-of-concept shows that a memory-aware runtime is feasible, though Mem.withDefault handling is still pending implementation.

Memory-aware language design might need some changes

There are two major challenges with a design using => operations. It's a difficult concept to understand, and it limits the language's ability to compile to environments where memory cannot be managed.

As a result, it might be rewarding to design the types in a Mem module that is only usable in environments where memory CAN be managed. This has several downsides to consider, but it might not necessarily be better than the confusing => operation. For example, consider the following two functions:

-- Example 1
foo1 : Foo => Foo -> Foo
foo1 x =
    (\y -> bar x y )
        |> Mem.withDefault identity

-- Example 2
foo2 : Foo -> Foo => Foo
foo2 x y =
    bar x y
        |> Mem.withDefault defaultFoo

The two functions are fundamentally different, offering different guarantees in different scenarios. While foo1 guarantees to return a Foo value once two Foo values have been supplied, foo2 guarantees to return a Foo -> Foo function after a single Foo value has been supplied.

This guarantee sounds relatively reasonable when you look at the code, but how much effort would it cost to write a fully memory-aware function?

foo : Foo => Foo => Foo
foo x =
    (\y -> bar x y |> Mem.withDefault defaultFoo )
        |> Mem.withDefault identity

This is a rather unappealing way to write code, and quite difficult to read. This might need some reworking.

Conclusion

I have learned a few more concepts and the development seems to be going rather well! I am encountering fewer hurdles than expected, and the design seems manageable.

As with the previous post, I am very much open to ideas. Let me know if you have any thoughts to share!

#foss #functional #languagedesign



This afternoon, on 19 February 2025 around 16:00, a large group of trams was stuck at Leidseplein in Amsterdam. They appeared to be caught in a knot.

I came across the knot at 15:56 and walked around a bit to survey the situation. Based on my footage, the knot appeared to look as follows.

Drawing of trams stuck at Leidseplein, drawn on OpenRailwayMap

The frontmost blue tram was a line 5 towards Amstelveen Stadshart. It had to turn left, but it couldn't quite do so because a line 17 was in the way. That tram, in turn, could not move forward, because up ahead a line 2 towards Amsterdam Centraal was waiting for a line 19 that wanted to cross the track.

Photo of the tram that cannot turn right because another tram cannot move forward quite far enough

By 16:02 the jam had been resolved. I was standing on the south side of Leidseplein and therefore didn't see how it was resolved – but my suspicion is that the tram in the curve was able to back up a little to let a few trams move forward.

I'm very curious whether this is a problem that occurs more often! Leidseplein has become busier and busier over the past few years, so I'd love to hear whether this happens there regularly. In any case, I hadn't seen it before.

#publictransport #traffic


For now, this is the only Dutch-language post!


Introduction

As a programmer who has experienced the elegance of writing Elm, I’ve often wished for a language that extends Elm’s core philosophy beyond the browser. While many programming languages emphasize type safety, immutability, and purity, few address memory safety as a core language feature.

What if we designed a programming language where memory failures never crash a program? Where aggressive dead code elimination produces highly optimized output? And where every function is guaranteed to be pure and immutable?

This article outlines a conceptual framework for such a language—its principles, challenges, and potential optimizations.


Core Principles

1. Functional, Pure & Immutable

Everything in the language is a function. Functions are pure, meaning they always return the same output for the same input, and immutability is enforced throughout. Even variables are just functions with zero arguments.

This ensures strong guarantees for compiler optimization and program correctness.

2. Side-Effects Managed by the Runtime

Like Elm, side-effects cannot be executed directly by user code. Instead, side-effects must be passed to the runtime for execution. This delegates responsibility to the runtime designers and allows the compiler to assume that all side-effects are managed safely.

3. Memory Safety as a Core Language Feature

This language ensures programs never crash due to memory exhaustion. A special memory-safe module (Mem) allows functions to specify default return values in case of memory failure:

add : Int -> Int => Int
add x y =
    x + y
        |> Mem.withDefault 0

Mechanism

  • The => syntax signals a memory-safe function.
  • Mem.withDefault 0 ensures a fallback return value in case of failure.
  • Default values are allocated at startup to prevent mid-execution failures.

By guaranteeing upfront memory allocation, the language prevents runtime failures once the program passes the initial startup phase.


Handling Dynamic Data Structures

Since the language enforces immutability, dynamically sized data structures must be created at runtime. If memory limits are reached, functions must define fallback strategies:

  • Return the original input if allocation fails.
  • Return a default value specified by the developer.

Ideally, memory exhaustion can be explicitly handled with a dedicated return type:

type Answer = Number Int | OutOfMemory

fib : Int => Answer
fib n =
    case n of
        0 -> Number 1
        1 -> Number 1
        _ ->
            case (fib (n - 1), fib (n - 2)) of
                (Number a, Number b) -> Number (a + b)
                _ -> OutOfMemory
    |> Mem.withDefault OutOfMemory

Extreme Dead Code Elimination

The compiler aggressively removes unused computations, reducing program size. Consider:

type alias Message =
    { happy : String
    , angry : String
    , sad : String
    , mood : Bool
    }

toText : Message -> String
toText msg =
    if msg.mood then msg.happy else msg.angry

main =
    { happy = "I am happy today."
    , angry = "I am extremely mad!"
    , sad = "I am kinda sad..."
    , mood = True
    }
    |> toText
    |> Mem.withDefault "Ran out of memory"
    |> Console.print

Optimization Process

  1. Since mood is always True, the else branch is never used.
  2. The function simplifies to toText msg = msg.happy.
  3. The .angry, .sad, and .mood fields are removed.
  4. Message reduces to type alias Message = String.
  5. The toText function is removed as a redundant identity function.

Final optimized output:

main = Console.print "I am happy today."

While this may require too much computation at compile time, all of these optimizations seem like fair assessments to make.


Compiler-Assisted Mutability for Performance

While immutability is enforced, the compiler introduces selective mutability when safe. If an old value is provably unused, it can be mutated in place to reduce memory allocations.

Example:

type alias Model = { name : String, age : Int }

capitalizeName : Model -> Model
capitalizeName model =
    { model | name = String.capitalize model.name }

Normally, this creates a new string and record. However, if the previous model.name isn't referenced anywhere else, the compiler mutates the name field in place, optimizing memory usage.


Compiler & Debugging Considerations

For effective optimizations, the compiler tracks:

  • Global variable usage to detect always-true conditions.
  • Usage patterns (e.g., optimizing predictable structures like Message).
  • External data sources, which are excluded from optimizations.

To aid debugging, the compiler could provide:

  • Graph-based visualization of variable flow.
  • Debugging toggles to disable optimizations selectively.

Conclusion: A New Paradigm for Functional Memory Safety?

Most languages handle memory safety through garbage collection (Java, Python), manual management (C, C++), or borrow-checking (Rust). This language proposes a fourth approach:

Memory-aware functional programming

By making memory failures a core language feature with predictable handling, functional programming can become more robust.

Would this approach be practical? The next step is to prototype a minimal interpreter to explore these ideas further.

If you're interested in language design, memory safety, and functional programming, I’d love to hear your thoughts!

#foss #functional #languagedesign



As a resident of the Netherlands, I take part in traffic by bike on a daily basis. One of my major frustrations is that communication between cars and bicycles can be difficult.

I believe that requiring cars to have brake lights on the front might help communication in traffic and make it safer.

While cars have turn signals and headlights, these don’t clearly indicate when a driver is slowing down, especially for pedestrians and cyclists. This can lead to hesitation, miscommunication, and even accidents.

The agony of pedestrians that cross at the worst time

Imagine you’re a pedestrian or cyclist approaching a crossing. A car is coming fast—do you cross?

Maybe it’s slowing down, but its blinkers suggest a turn. Is it stopping for you or just taking the turn carefully? Maybe the driver noticed you and is coasting, but they’re still moving. Do you cross?

By the time the car finally stops, you realize this whole dance wasted time for both of you.

There are comedy sketches on the internet ridiculing this situation, and it's an annoying experience for all drivers, cyclists and pedestrians involved.


Cars don't have this problem among one another. Drivers can't see each other's body language, so they have to trust that everyone follows the priority rules correctly.

Cyclists don’t have this issue either. They notoriously ignore traffic rules, but at least they can read each other’s body language. Priority usually goes to the one who pretends hardest they don’t see the other person.

Pedestrians still bump into one another, but this is rarely deadly.

It's the unique combination of car drivers, whose body language is hard to read, and cyclists, who ignore traffic rules, that creates the problem. As a result, neither really knows what the other is up to.

Frontal brake lights

Brake lights clearly signal to drivers behind, 'Look, I'm decelerating,' and they work. Even the third brake light on the back has been shown to improve safety on the road, and I believe that frontal brake lights might do the same.

From my understanding, the United Nations Economic Commission for Europe (UNECE) seems to regulate vehicles in the European Union. As an individual, I cannot simply join one of their meetings and ask “what about frontal brake lights?” But member state representatives can.

If you agree this could improve road safety, consider raising the idea with your local representatives. I'll be reaching out to mine to see if this can gain traction at the UNECE level. If you're more knowledgeable on the topic, feel free to reach out to me on Matrix or get in touch through the Fediverse. I'd like to hear whether this is a good idea before I try to pursue it politically.

Until then, whenever you find yourself hesitating at a crossing, ask yourself—would frontal brake lights have made this easier? If so, let’s make it happen.

#politics #regulation #frontalbrakelights #traffic

