Neil Madden

  • Mythos and its impact on security

    I’m sure by now you’ve all read the news about Anthropic’s new “Mythos” model and its apparently “dangerous” capabilities in finding security vulnerabilities. I’m sure everyone reading this also has opinions about that. Well, here are a few of mine.


    14 April, 2026
    AI, capability-security, LLMs, memory-safety, programming, Security

  • Maybe version ranges are a good idea after all?

One of the most important lessons I’ve learned in security is that it’s always better to push security problems back to the source as much as possible. For example, a small number of experts (hopefully) make cryptography libraries, so it’s generally better if they put in checks to prevent things like invalid curve attacks rather than leaving that up to applications, so that we don’t get the same vulnerabilities cropping up again and again. It’s much more efficient to fix the problem at source rather than having everyone re-implement the same redundant checks everywhere.

    Now consider how we currently manage security vulnerabilities in third-party software dependencies. Current accepted wisdom is to lock dependencies to a single specific version, often with a cryptographic hash to ensure you get exactly that version. This is great for reproducibility, and everyone loves reproducibility. However, when there’s a security vulnerability in that dependency, every single consumer of that library has to manually update to the next version, and then their consumers have to update, and so on. The fix is done at source, but the responsibility for updating cascades through the entire ecosystem. This is not efficient. Two years after log4shell, around 25% of vulnerable consumers had apparently still not updated.


    19 March, 2026
    Dependencies, programming, Security, Supply Chain

  • Why I don’t use LLMs for programming

    I originally posted this on Mastodon, but I thought I’d add it here too:

    “What I mean is that if you really want to understand something, the best way is to try and explain it to someone else. That forces you to sort it out in your own mind. And the more slow and dim-witted your pupil, the more you have to break things down into more and more simple ideas. And that’s really the essence of programming. By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself. The teacher usually learns more than the pupil. Isn’t that true?” — Douglas Adams

    “It is not knowledge, but the act of learning, not possession, but the act of getting there which generates the greatest satisfaction.” — Carl Friedrich Gauss

    “You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program.” — Alan Perlis (of course)

    (Ironically, WordPress is now offering to “improve” these quotes with AI…)

    2 March, 2026
    AI, artificial intelligence, LLMs

  • Looking for vulnerabilities is the last thing I do

There’s a common misconception among developers that my job, as an (application) Security Engineer, is to just search for security bugs in their code. They may well have seen junior security engineers doing this kind of thing. But, although this can be useful (and is part of the job), it’s not what I focus on and it can be counterproductive. Let me explain.


    20 February, 2026
    Application Security, Security, Security Engineering

  • Were URLs a bad idea?

    When I was writing Rating 26 years of Java changes, I started reflecting on the new HttpClient library in Java 11. The old way of fetching a URL was to use URL.openConnection(). This was intended to be a generic mechanism for retrieving the contents of any URL: files, web resources, FTP servers, etc. It was a pluggable mechanism that could, in theory, support any type of URL at all. This was the sort of thing that was considered a good idea back in the 90s/00s, but has a bunch of downsides:

    • Fetching different types of URLs can have wildly different security and performance implications, and wildly different failure cases. Do I really want to accept a mailto: URL or a javascript: “URL”? No, never.
    • The API was forced to be lowest-common-denominator, so if you wanted to set options that are specific to a particular protocol then you had to cast the return URLConnection to a more specific sub-class (and therefore lose generality).
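To make the cast problem concrete, here’s a minimal sketch using the old API (no network access is needed, since openConnection() only selects a protocol handler without connecting):

```java
import java.net.HttpURLConnection;
import java.net.URI;
import java.net.URLConnection;

public class UrlCastDemo {
    public static void main(String[] args) throws Exception {
        // openConnection() just picks a handler based on the scheme.
        URLConnection web = URI.create("https://example.com/").toURL().openConnection();
        URLConnection file = URI.create("file:///tmp/whatever").toURL().openConnection();

        // HTTP-specific options require a downcast, losing generality...
        if (web instanceof HttpURLConnection http) {
            http.setRequestMethod("HEAD");
        }
        // ...and the same cast is simply unavailable for any other scheme:
        System.out.println(web instanceof HttpURLConnection);  // true
        System.out.println(file instanceof HttpURLConnection); // false
    }
}
```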

    The new HttpClient in Java 11 is much better at doing HTTP, but it’s also specific to HTTP/HTTPS. And that seems like a good thing?

    In fact, in the vast majority of cases the uniformity of URLs is no longer a desirable aspect. Most apps and libraries are specialised to handle essentially a single type of URL, and are better off because of it. Are there still cases where it is genuinely useful to be able to accept a URL of any (or nearly any) scheme?

    12 November, 2025
    URLs, Web

  • Monotonic Collections: a middle ground between immutable and fully mutable

    This post covers several topics around collections (sets, lists, maps/dictionaries, queues, etc) that I’d like to see someone explore more fully. To my knowledge, there are many alternative collection libraries for Java and for many other languages, but I’m not aware of any that provide support for monotonic collections. What is a monotonic collection, I hear you ask? Well, I’m about to answer that. Jesus, give me a moment.

It’s become popular, in the JVM ecosystem at least, for collections libraries to provide parallel class hierarchies for mutable and immutable collections: Set vs MutableSet, List vs MutableList, etc. I think this probably originated with Scala, and has been copied by Kotlin and various alternative collection libraries, e.g. Eclipse Collections, Guava, etc. There are plenty of articles out there on the benefits and drawbacks of each type. But the gulf between fully immutable and fully mutable objects is enormous: they are polar opposites, with wildly different properties, performance profiles, and gotchas. I’m interested in exploring the space between these two extremes. (Actually, I’m interested in someone else exploring it, hence this post). One such point is the idea of monotonic collections, and I’ll now explain what that means.

    By monotonic I mean here logical monotonicity: the idea that any information that is entailed by some set of logical formulas is also entailed by any superset of those formulas. For a collection data structure, I would formulate that as follows:

    If any (non-negated) predicate is true of the collection at time t, then it is also true of the collection at any time t’ > t.

    For example, if c is a collection and c.contains(x) returns true at some point in time, then it must always return true from then onwards.

    To make this concrete, a MonotonicList (say) would have an append operation, but not insert, delete, or replace operations. More subtly, monotonic collections cannot have any aggregate operations: i.e., operations that report statistics/summary information on the collection as a whole. For example, you cannot have a size method, as the size will change as new items are added (and thus the predicate c.size() == n can become false). You can have (as I understand it) map and filter operations, but not a reduce/fold.

    So why are monotonic collections an important category to look at? Firstly, monotonic collections can have some of the same benefits as immutable data structures, such as simplified concurrency. Secondly, monotonic collections are interesting because they can be (relatively) easily made distributed, per the CALM principle: Consistency as Logical Monotonicity (insecure link, sorry). This says that monotonic collections are strongly eventually consistent without any need for coordination protocols. Providing such collections would thus somewhat simplify making distributed systems.

    Class hierarchies and mutability

    Interestingly, Kotlin decided to make their mutable collection classes sub-types of the immutable ones: MutableList is a sub-type of List, etc. (They also decided to make the arrows go the other way from normal in their inheritance diagram, crazy kids). This makes sense in one way: mutable structures offer more operations than immutable ones. But it seems backwards from my point of view: it says that all mutable collections are immutable, which is logically false. (But then they don’t include the word Immutable in the super types). It also means that consumers of a List can’t actually assume it is immutable: it may change underneath them. Guava seems to make the opposite decision: ImmutableList extends the built-in (mutable) List type, probably for convenience. Both options seem to have drawbacks.

I think the way to resolve this is to entirely separate the read-only view of a collection from the means to update it. On the view-side, we would have a class hierarchy consisting of ImmutableList, which inherits from MonotonicList, which inherits from the general List. On the mutation side, we’d have ListAppender and ListUpdater classes, where the latter extends the former. Creating a mutable or monotonic list would return a pair of the read-only list view, and the mutator object, something like the following (pseudocode):

    ImmutableList<T> list = ImmutableList.of(....); // normal
    Pair<MonotonicList<T>, ListAppender<T>> mono = MonotonicList.of(...);
    Pair<List<T>, ListUpdater<T>> mut = List.of(...);
    

    The type hierarchies would look something like the following:

    interface List<E> {
        void forEach(Consumer<E> action);
        ImmutableList<E> snapshot();
    }
    
    interface MonotonicList<E> extends List<E> {
        boolean contains(E element);
        // Positive version of isEmpty():
        boolean containsAnything(); 
        <T> MonotonicList<T> map(Function<E, T> f);
        MonotonicList<E> filter(Predicate<E> p);
    }
    
    interface ImmutableList<E> extends MonotonicList<E> {
        int size();
        <T> T reduce(BiFunction<E, T, T> f, T initial);
    }
    
    interface ListAppender<E> {
        void append(E element);
    }
    
    interface ListUpdater<E> extends ListAppender<E> {
        E remove(int index);
        E replace(int index, E newValue);
        void insert(int index, E newValue);
    }
    

This seems to allow the natural sub-type relationships between types on both sides of the divide. It’s a sort of CQRS at the level of data structures, but it seems to solve the issue that the inheritance direction for read-only consumers is the inverse of the natural hierarchy for mutating producers. (This has a relationship to covariant/contravariant subtypes, but I’m buggered if I’m looking that stuff up again in my free time).
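As a minimal sketch of how the split might work in practice, here’s a toy implementation (the interface and Pair names are the hypothetical ones from this post, backed by a plain ArrayList):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MonotonicDemo {
    interface MonotonicList<E> {
        boolean contains(E element);
        void forEach(Consumer<E> action);
    }

    interface ListAppender<E> {
        void append(E element);
    }

    record Pair<A, B>(A view, B appender) {}

    // Creating a monotonic list hands back two capabilities: a read-only,
    // grow-only view, and the sole means of growing it.
    static <E> Pair<MonotonicList<E>, ListAppender<E>> monotonicList() {
        List<E> backing = new ArrayList<>();
        MonotonicList<E> view = new MonotonicList<E>() {
            public boolean contains(E element) { return backing.contains(element); }
            public void forEach(Consumer<E> action) { backing.forEach(action); }
        };
        ListAppender<E> appender = backing::add;
        return new Pair<>(view, appender);
    }

    public static void main(String[] args) {
        Pair<MonotonicList<String>, ListAppender<String>> p = monotonicList();
        p.appender().append("hello");
        // Once true, contains("hello") stays true: nothing can remove it.
        System.out.println(p.view().contains("hello")); // prints true
    }
}
```

Holding only the MonotonicList view, a consumer can never observe a predicate flip from true to false; only the holder of the ListAppender can change the collection, and only by growing it.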

    Anyway, these thoughts are obviously pretty rough, but maybe some inklings of ideas if anyone is looking for an interesting project to work on.

    11 November, 2025
    data structures, Functional Programming, immutability, Java, programming

  • Fluent Visitors: revisiting a classic design pattern

    It’s been a while since I’ve written a pure programming post. I was recently implementing a specialist collection class that contained items of a number of different types. I needed to be able to iterate over the collection performing different actions depending on the specific type. There are lots of different ways to do this, depending on the school of programming you prefer. In this article, I’m going to take a look at a classic “Gang of Four” design pattern: The Visitor Pattern. I’ll describe how it works, provide some modern spins on it, and compare it to other ways of implementing the same functionality. Hopefully even the most die-hard anti-OO/patterns reader will come away thinking that there’s something worth knowing here after all.

    (Design Patterns? In this economy?)


    4 November, 2025
    Design Patterns, Functional Programming, Java, programming, Visitor Pattern

  • Rating 26 years of Java changes

    I first started programming Java at IBM back in 1999 as a Pre-University Employee. If I remember correctly, we had Java 1.1.8 installed at that time, but were moving to Java 1.2 (“Java 2”), which was a massive release—I remember engineers at the time grumbling that the ever-present “Java in a Nutshell” book had grown to over 600 pages. I thought I’d take a look back at 26 years of Java releases and rate some of the language and core library changes (Java SE only) that have occurred over this time. It’s a very different language to what I started out with!

    I can’t possibly cover every feature of those releases, as there are just way too many. So I’m just going to cherry-pick some that seemed significant at the time, or have been in retrospect. I’m not going to cover UI- or graphics-related stuff (Swing, Java2D etc), or VM/GC improvements. Just language changes and core libraries. And obviously this is highly subjective. Feel free to put your own opinions in the comments! The descriptions are brief and not intended as an introduction to the features in question: see the links from the Wikipedia page for more background.

    NB: later features are listed from when they were first introduced as a preview.

    Java 2 – 1998

The Collections Framework: before the collections framework, there were just raw arrays, Vector, and Hashtable. It gets the job done, but I don’t think anyone thinks the Java collections framework is particularly well designed. One of the biggest issues was a failure to distinguish between mutable and immutable collections, strange inconsistencies like why Iterator has a remove() method (but not, say, update or insert), and so on. Various improvements have been made over the years, and I do still use it in preference to pulling in a better alternative library, so it has stood the test of time in that respect. 4/10

    Java 1.4 – 2002

    The assert keyword: I remember being somewhat outraged at the time that they could introduce a new keyword! I’m personally quite fond of asserts as an easy way to check invariants without having to do complex refactoring to make things unit-testable, but that is not a popular approach. I can’t remember the last time I saw an assert in any production Java code. 3/10

    Regular expressions: Did I really have to wait 3 years to use regex in Java? I don’t remember ever having any issues with the implementation they finally went for. The Matcher class is perhaps a little clunky, but gets the job done. Good, solid, essential functionality. 9/10

    “New” I/O (NIO): Provided non-blocking I/O for the first time, but really just a horrible API (still inexplicably using 32-bit signed integers for file sizes, limiting files to 2GB, confusing interface). I still basically never use these interfaces except when I really need to. I learnt Tcl/Tk at the same time that I learnt Java, and Java’s I/O always just seemed extraordinarily baroque for no good reason. Has barely improved in 2 and a half decades. 0/10

    Also notable in this release was the new crypto APIs: the Java Cryptography Extensions (JCE) added encryption and MAC support to the existing signatures and hashes, and we got JSSE for SSL. Useful functionality, dreadful error-prone APIs. 1/10

    Java 5 – 2004

    Absolutely loads of changes in this release. This feels like the start of modern Java to me.

    Generics: as Go discovered on its attempt to speed-run Java’s mistakes all over again, if you don’t add generics from the start then you’ll have to retrofit them later, badly. I wouldn’t want to live without them, and the rapid and universal adoption of them shows what a success they’ve been. They certainly have complicated the language, and there are plenty of rough edges (type erasure, reflection, etc), but God I wouldn’t want to live without them. 8/10.

    Annotations: sometimes useful, sometimes overused. I know I’ve been guilty of abusing them in the past. At the time it felt like they were ushering a new age of custom static analysis, but that doesn’t really seem to be used much. Mostly just used to mark things as deprecated or when overriding a method. Meh. 5/10

Autoboxing: there was a time when, if you wanted to store an integer in a collection, you had to manually convert to and from the primitive int type and the Integer “boxed” class. Such conversion code was everywhere. Java 5 got rid of that, by getting the compiler to insert those conversions for you. Brevity, but no less inefficient: the boxing still happens, it’s just hidden. 7/10

    Enums: I’d learned Haskell by this point, so I couldn’t see the point of introducing enums without going the whole hog and doing algebraic datatypes and pattern-matching. (Especially as Scala launched about this time). Decent feature, and a good implementation, but underwhelming. 6/10

Vararg methods: these have done quite a lot to reduce verbosity across the standard library. A nice small improvement that’s been a genuine quality-of-life enhancement. I still never really know when to put @SafeVarargs annotations on things though. 8/10

    The for-each loop: cracking, use it all the time. Still not a patch on Tcl’s foreach (which can loop over multiple collections at once), but still very good. Could be improved and has been somewhat replaced by Streams. 8/10

    Static imports: Again, a good simple change. I probably would have avoided adding * imports for statics, but it’s quite nice for DSLs. 8/10

    Doug Lea’s java.util.concurrent etc: these felt really well designed. So well designed that everyone started using them in preference to the core collection classes, and they ended up back-porting a lot of the methods. 10/10

    Java 7 – 2011

    After the big bang of Java 5, Java 6 was mostly performance and VM improvements, I believe, so we had to wait until 2011 for more new language features.

    Strings in switch: seems like a code smell to me. Never use this, and never see it used. 1/10

    Try-with-resources: made a huge difference in exception safety. Combined with the improvements in exception chaining (so root cause exceptions are not lost), this was a massive win. Still use it everywhere. 10/10
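For example, a typical use (a small self-contained sketch using a temp file):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadFirstLine {
    static String firstLine(Path path) throws IOException {
        // The reader is closed automatically, even if readLine() throws,
        // and any exception from close() is attached as suppressed rather
        // than clobbering the root cause.
        try (BufferedReader reader = Files.newBufferedReader(path)) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "hello\nworld\n");
        System.out.println(firstLine(tmp)); // prints hello
        Files.delete(tmp);
    }
}
```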

    Diamond operator for type parameter inference: a good minor syntactic improvement to cut down the visual noise. 6/10

    Binary literals and underscores in literals: again, minor syntactic sugar. Nice to have, rarely something I care about much. 4/10

    Path and Filesystem APIs: I tend to use these over the older File APIs, but just because it feels like I should. I couldn’t really tell you if they are better or not. Still overly verbose. Still insanely hard to set file permissions in a cross-platform way. 3/10

    Java 8 – 2014

    Lambdas: somewhat controversial at the time. I was very in favour of them, but only use them sparingly these days, due to ugly stack traces and other drawbacks. Named method references provide most of the benefit without being anonymous. Deciding to exclude checked exceptions from the various standard functional interfaces was understandable, but also regularly a royal PITA. 4/10

    Streams: Ah, streams. So much potential, but so frustrating in practice. I was hoping that Java would just do the obvious thing and put filter/map/reduce methods onto Collection and Map, but they went with this instead. The benefits of functional programming weren’t enough to carry the feature, I think, so they had to justify it by promising easy parallel computing. This scope creep enormously over-complicated the feature, makes it hard to debug issues, and yet I almost never see parallel streams being used. What I do still see quite regularly is resource leaks from people not realising that the stream returned from Files.lines() has to be close()d when you’re done—but doing so makes the code a lot uglier. Combine that with ugly hacks around callbacks that throw checked exceptions, the non-discoverable API (where are the static helper functions I need for this method again?), and the large impact on lots of very common code, and I have to say I think this was one of the largest blunders in modern Java. I blogged what I thought was a better approach 2 years earlier, and I still think it would have been better. There was plenty of good research that different approaches were better, since at least Oleg Kiselyov’s work in the early noughties. 1/10
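The Files.lines() fix is simple but easy to forget; a minimal sketch of the leak-free pattern:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class CountLines {
    static long countNonBlank(Path path) throws IOException {
        // Files.lines() keeps the file handle open until the stream is
        // closed, so the stream itself must go in try-with-resources.
        try (Stream<String> lines = Files.lines(path)) {
            return lines.filter(line -> !line.isBlank()).count();
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        Files.writeString(tmp, "a\n\nb\n");
        System.out.println(countNonBlank(tmp)); // prints 2
        Files.delete(tmp);
    }
}
```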

    Java Time: Much better than what came before, but I have barely had to use much of this API at all, so I’m not in a position to really judge how good this is. Despite knowing how complex time and dates are, I do have a nagging suspicion that surely it doesn’t all need to be this complex? 8/10

    Java 9 – 2017

    Modules: I still don’t really know what the point of all this was. Enormous upheaval for minimal concrete benefit that I can discern. The general advice seems to be that modules are (should be) an internal detail of the JRE and best ignored in application code (apart from when they spuriously break things). Awful. -10/10 (that’s minus 10!)

    jshell: cute! A REPL! Use it sometimes. Took them long enough. 6/10

    Java 10 – 2018

    The start of time-based releases, and a distinct ramp-up of features from here on, trying to keep up with the kids.

    Local type inference (“var”): Some love this, some hate it. I’m definitely in the former camp. 9/10

    Java 11 – 2018

    New HTTP Client: replaced the old URL.openStream() approach by creating something more like Apache HttpClient. It works for most purposes, but I do find the interface overly verbose. 6/10

    This release also added TLS 1.3 support, along with djb-suite crypto algorithms. Yay. 9/10

    Java 12 – 2019

    Switch expressions: another nice mild quality-of-life improvement. Not world changing, but occasionally nice to have. 6/10

    Java 13 – 2019

    Text blocks: on the face of it, what’s not to like about multi-line strings? Well, apparently there’s a good reason that injection attacks remain high on the OWASP Top 10, as the JEP introducing this feature seemed intent on getting everyone writing SQL, HTML and JavaScript using string concatenation again. Nearly gave me a heart attack at the time, and still seems like a pointless feature. Text templates (later) are trying to fix this, but seem to be currently in limbo. 3/10
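A sketch of the hazard (the table and column names here are made up for illustration):

```java
public class TextBlockInjection {
    public static void main(String[] args) {
        String userInput = "'; DROP TABLE users; --";

        // Tempting with text blocks, but injectable: the attacker's
        // input is spliced straight into the SQL syntax.
        String bad = """
                SELECT * FROM users
                WHERE name = '%s'
                """.formatted(userInput);
        System.out.println(bad.contains("DROP TABLE")); // true

        // Safer: keep the text block, but bind values as parameters,
        // executed elsewhere via PreparedStatement.setString(1, userInput).
        String good = """
                SELECT * FROM users
                WHERE name = ?
                """;
        System.out.println(good.contains("DROP TABLE")); // false
    }
}
```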

    Java 14 – 2020

    Pattern matching in instanceof: a little bit of syntactic sugar to avoid an explicit cast. But didn’t we all agree that using instanceof was a bad idea decades ago? I’m really not sure who was doing the cost/benefit analysis on these kinds of features. 4/10

    Records: about bloody time! Love ‘em. 10/10

    Better error messages for NullPointerExceptions: lovely. 8/10

    Java 15 – 2020

Sealed classes: in principle I like these a lot. We’re slowly getting towards a weird implementation of algebraic datatypes. I haven’t used them very much so far. 8/10

    EdDSA signatures: again, a nice little improvement in the built-in cryptography. Came with a rather serious bug though… 8/10

    Java 16 – 2021

    Vector (SIMD) API: this will be great when it is finally done, but still baking several years later. ?/10

    Java 17 – 2021

    Pattern matching switch: another piece of the algebraic datatype puzzle. Seems somehow more acceptable than instanceof, despite being largely the same idea in a better form. 7/10
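Combined with sealed classes and records, this gets close to the ADT experience; a minimal sketch (requires Java 21):

```java
public class ShapeDemo {
    sealed interface Shape permits Circle, Rect {}
    record Circle(double radius) implements Shape {}
    record Rect(double w, double h) implements Shape {}

    static double area(Shape s) {
        // Exhaustive: the compiler knows all permitted subtypes,
        // so no default branch is needed.
        return switch (s) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rect r -> r.w() * r.h();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Rect(3, 4))); // prints 12.0
    }
}
```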

    Java 18 – 2022

    UTF-8 by default: Fixed a thousand encoding errors in one fell swoop. 10/10

    Java 19 – 2022

    Record patterns: an obvious extension, and I think we’re now pretty much there with ADTs? 9/10

    Virtual threads: being someone who never really got on with async/callback/promise/reactive stream-based programming in Java, I was really happy to see this feature. I haven’t really had much reason to use them in anger yet, so I don’t know how well they’ve been done. But I’m hopeful! ?/10

    Java 21 – 2023

    String templates: these are exactly what I asked for in A few programming language features I’d like to see, based on E’s quasi-literal syntax, and they fix the issues I had with text blocks. Unfortunately, the first design had some issues, and so they’ve gone back to the drawing board. Hopefully not for too long. I really wish they’d not released text blocks without this feature. 10/10 (if they ever arrive).

    Sequenced collections: a simple addition that adds a common super-type to all collections that have a defined “encounter order”: lists, deques, sorted sets, etc. It defines convenient getFirst() and getLast() methods and a way to iterate items in the defined order or in reverse order. This is a nice unification, and plugs what seems like an obvious gap in the collections types, if perhaps not the most pressing issue? 6/10
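For example (Java 21):

```java
import java.util.List;
import java.util.SequencedCollection;

public class SequencedDemo {
    public static void main(String[] args) {
        // Any List is a SequencedCollection as of Java 21.
        SequencedCollection<String> items = List.of("a", "b", "c");
        System.out.println(items.getFirst()); // a
        System.out.println(items.getLast());  // c
        System.out.println(items.reversed()); // [c, b, a]
    }
}
```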

    Wildcards in patterns: adds the familiar syntax from Haskell and Prolog etc of using _ as a non-capturing wildcard variable in patterns when you don’t care about the value of that part. 6/10

    Simplified console applications: Java finally makes simple programs simple for beginners, about a decade after universities stopped teaching Java to beginners… Snark aside, this is a welcome simplification. 8/10

    This release also adds support for KEMs, although in the simplest possible form only. Meh. 4/10

    Java 22 – 2024

    The only significant change in this release is the ability to have statements before a call to super() in a constructor. Fine. 5/10

    Java 23 – 2024

    Primitive types in patterns: plugs a gap in pattern matching. 7/10

    Markdown javadoc comments: Does anyone really care about this? 1/10

Java 24 – 2025

    The main feature here from my point of view as a crypto geek is the addition of post-quantum cryptography in the form of the newly standardised ML-KEM and ML-DSA algorithms, and support in TLS.

    Java 25 – 2025

    Stable values: this is essentially support for lazily-initialised final variables. Lazy initialisation is often trickier than it should be in Java, so this is a welcome addition. Remembering Alice ML, I wonder if there is some overlap between the proposed StableValue and a Future? 7/10?

    PEM encoding of cryptographic objects is welcome from my point of view, but someone will need to tell me why this is not just key/cert.getEncoded(“PEM”)? Decoding support is useful though, as that’s a frequent reason I have to grab Bouncy Castle still. 7/10

    Well, that brings us pretty much up to date. What do you think? Agree, disagree? Are you a passionate defender of streams or Java modules? Have at it in the comments.

    12 September, 2025
    Java

  • No, no, no. You’re still not doing REST right!

    OK, so you’ve made your JSON-over-HTTP API. Then someone told you that it’s not “really” REST unless it’s hypertext-driven. So now all your responses contain links, and you’re defining mediatypes properly and all that stuff. But I’m here to tell you that you’re still not doing it right. What you’re doing now is just “HYPE”. Now I’ll let you in on the final secret to move from HYPE to REST.

    OK, I’m joking here. But there is an aspect of REST that doesn’t seem to ever get discussed despite the endless nitpicking over what is and isn’t really REST. And it’s an odd one, because it’s literally the name: Representational State Transfer. I remember this being quite widely discussed in the early 2000s when REST was catching on, but seems to have fallen by the wayside in favour of discussion of other architectural decisions.

    If you’re familiar with OO design, then when you come to design an API you probably think of some service that encapsulates a bunch of state. The service accepts messages (method calls) that manipulate the internal state, from one consistent state to another. That internal state remains hidden and the service just returns bits of it to clients as needed. Clients certainly don’t directly manipulate that state. If you need to perform multiple manipulations then you make multiple requests (multiple method calls).

But the idea of REST is to flip that on its head. If a client wants to update the state, it makes a request to the server, which generates a representation of the state of the resource and sends it to the client. The client then makes whatever changes it wants locally, and sends the updated representation back to the server. Think of checking out a file from Git, making changes and then pushing the changes back to the server. (Can you imagine instead having to send individual edit commands to make changes?)

    This was a stunning “strange inversion of reasoning” to me at the time, steeped as I was in OO orthodoxy. My first reaction was largely one of horror. But I’d missed the key word “representation” in the description. Returning a representation of the state doesn’t mean it has to directly represent the state as it is stored on the server, it just has to be some logically appropriate representation. And that representation doesn’t have to represent every detail: it can be a summary, or more abstract representation.
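To illustrate the inversion without any actual HTTP, here’s a toy in-memory sketch of the fetch-edit-replace cycle (everything here is hypothetical; GET and PUT are just modelled as method calls on a map):

```java
import java.util.HashMap;
import java.util.Map;

public class RestStyleDemo {
    // Toy "server": resource state keyed by path.
    static final Map<String, Map<String, String>> server = new HashMap<>();

    // GET: hand out a *representation* -- a copy, not the live state.
    static Map<String, String> get(String path) {
        return new HashMap<>(server.get(path));
    }

    // PUT: whole-state replacement with the client's edited representation.
    static void put(String path, Map<String, String> representation) {
        server.put(path, new HashMap<>(representation));
    }

    public static void main(String[] args) {
        server.put("/users/42", new HashMap<>(Map.of("name", "Alice", "role", "user")));

        // Client: fetch the representation, edit it locally, send it all back.
        Map<String, String> doc = get("/users/42");
        doc.put("role", "admin");
        put("/users/42", doc);

        System.out.println(server.get("/users/42").get("role")); // prints admin
    }
}
```

Contrast this with the OO style, where the client would instead send a narrow "promote user" command and never hold the state itself.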

    Is it a good idea? I’ll leave that for you to decide. I think it makes sense in some cases, not in others. I’m more just interested in how this whole radical aspect of REST never gets mentioned anymore. It suggests to me a much more declarative conception of API design, whereas even the most hypertext-driven APIs I see tend to still have a very imperative flavour. Thoughts?

    9 July, 2025
    API, REST, Web

  • Streaming public key authenticated encryption with insider auth security

    Note: this post will probably only really make sense to cryptography geeks.

    In “When a KEM is not enough”, I described how to construct multi-recipient (public key) authenticated encryption. A naïve approach to this is vulnerable to insider forgeries: any recipient can construct a new message (to the same recipients) that appears to come from the original sender. For some applications this is fine, but for many it is not. Consider, for example, using such a scheme to create auth tokens for use at multiple endpoints: A and B. Alice gets an auth token for accessing endpoints A and B and it is encrypted and authenticated using the scheme. The problem is, as soon as Alice presents this auth token to endpoint A, that endpoint (if compromised or malicious) can use it to construct a new auth token to access endpoint B, with any permissions it likes. This is a big problem IMO.

    I presented a couple of solutions to this problem in the original blog post. The most straightforward is to sign the entire message, providing non-repudiation. This works, but as I pointed out in “Digital signatures and how to avoid them”, signature schemes have lots of downsides and unintended consequences. So I developed a weaker notion of “insider non-repudiation”, and a scheme that achieves it: we use a compactly-committing symmetric authenticated encryption scheme to encrypt the message body, and then include the authentication tag as additional authenticated data when wrapping the data encryption key for each recipient. This prevents insider forgeries, but without the hammer of full blown outsider non-repudiation, with the problems it brings.

I recently got involved in a discussion on Mastodon about adding authenticated encryption to Age (a topic I’ve previously written about), where abacabadabacaba pointed out that my scheme seems incompatible with streaming encryption and decryption, which is important in Age use-cases as it is often used to encrypt large files. Age supports streaming for unauthenticated encryption, so it would be useful to preserve this for authenticated encryption too. Doing this with signatures is fairly straightforward: just sign each “chunk” individually. A subtlety is that you also need to sign a chunk counter and “last chunk” bit to prevent reordering and truncation, but as abacabadabacaba points out these bits are already in Age, so it’s not too hard. But can you do the same without signatures? Yes, you can, and efficiently too. In this post I’ll show how.


    2 July, 2025
    authenticated encryption, cryptography, public key encryption, streaming encryption
