Bug larping for fun and profit
So it's been almost a year since I made the jump from team offense to team defense.
I spent so long in a pure offense focused org that the pivot into thinking about defensive problems was a bit jarring at first.
Not least because the economics of bug hunting in offense vs bug hunting in defense are completely upside down.
Offense is goal oriented. You establish what you want to achieve, and audit accordingly. Tending to your attack surface garden until you've grown all the ingredients you need to complete your soup of the 0day.
On the defense side you're stuck trying to extrapolate from the ghosts of haxmas past. And while there's certainly heaps of known and novel attack research out there these days … at the end of the day you are going up against an attacker whose goals, timelines, and resources are essentially unknown.
And that blind spot still makes all the difference.
So we play the mitigation game.
Assume the code is borky and plop it in a sandbox, a hypervisor, a chromebook wrapped in tinfoil running on the neighbor's wifi …
Shuffle the memory. Make it non-executable. Hell, make it non-readable! Sign the pointers. Validate the control flows. Cookie all the things. Heap integrity checks. SMEP/SMAP/SMOP/SMUP …
Oh right, the hardware. Assume the hardware is broken. De-optimize. Slow things down. Duplicate all the things at every privilege level.
Prevent the side-effect.
Stop the unknown.
PULL THE PLUG!
Phew.
Quite a bit of advanced exploit development happens out in the open now, and arXiv is, apparently, the new Phrack.
Who woulda thunk it?
You can even get a PhD based on offense research these days. And not some handwavy diatribe that yammers on about polymorphic shellcode detection either, no, the real thing.
Systems engineering teams try to distill the lessons on display into ever advancing mitigations.
Exploit teams accept the challenge.
Round and round it goes, and the complexity bar keeps getting raised.
If you missed the era where most of this stuff happened out of sheer rage or curiosity in the dark corners of IRC channels … that sound you hear in the background is the slightly nervous chuckle of 40-something hackers worldwide as they start to worry about aging out of the game.
It's easy to confuse complexity for efficacy, especially when they're equally headache inducing.
There's a lot of talk about the importance of establishing exploitability in some sort of automated way. The idea being that you can only properly prioritize which bugs to fix through proven exploitability.
I go back and forth on this myself.
The obvious argument is that yes, if a bug has provable security impact, then it deserves to be prioritized.
I think where it gets tricky is with the idea of exploitability. Exploitability is a weird term. It sounds like an objective valuation.
A true/false.
But exploitability is really a function of subjective attacker motivation, experience, resources, and skill.
So, for all intents and purposes, the best you can do is establish exploitability based on what you, the defender, know about exploitation in relation to how and where your assets are deployed and under which threat models you practically operate.
Ben Hawkes has some solid thoughts[1] on the subject in terms of "equivalence classes". Match the conditions in which bug A manifests itself to the known exploitable conditions of bug B and if there's sufficient overlap then you can make a reasonable assumption of exploitability.
If not, then go pay someone to try and write the exploit and stack the result of that effort into your hackonomicon.
Many moons ago I wrote a whitepaper[2] for Microsoft on the subject when they first came out with the "exploitability index". I tried to argue along similar lines then, but I think the main difference anno 2020 is that the quality of the public body of exploitation knowledge is much higher.
Julien Vanegue recently also made some interesting arguments[3] in favor of exploitability-based triage prioritization and where the state of the art is currently lacking.
All debate aside, his exploitability Venn diagram is much prettier than mine.
One of the harder practical problems to tackle in the exploitability realm is that the problem space starts to explode when you combine platform configurations with software ecosystems.
Every layer of the attack surface vertical introduces a massive growth of potential exploitability that you now have to reason about.
VuSec's recent work[4], which combines their speculative execution vulnerability research with a more traditional software vulnerability, is a great example of what happens when someone goes "now kiss!" to security assumptions based on disjointed threat models.
So probably the best you can hope for is some limited interpretation of Ben and Julien's ideas, applied to some specific subset of your code, in some known and static platform configuration. The more you can limit the threat model exponents, the more practical automated decision making regarding exploitability becomes.
But, that does not factor in the transubstantiative nature of bugs in terms of exploit development. Many successful exploits are in fact a combination of bugs. Independently those bugs might not represent an exploitable vulnerability, but in concert, they do.
Or, even more fun, one bug becomes multiple exploit primitives. Julien touches on this a bit in his section on the flawed thinking around "Approximate Exploitability". To truly make informed decisions about exploitability, you have to enumerate all side-effects, of every bug, in every combination.
This makes an exploitability focused triage process much trickier, because now you need combined bug awareness. The kind of awareness that requires focused audits to try and find the various pieces to complete the attack puzzle.
A more practical approach is probably to evaluate bugs for their viability as an exploit primitive. Sean Heelan's research[5] in the AEG realm comes to mind.
But even then, it's hard to generalize the problem away from very specific platform and configuration constraints. You would have to reinvent the same wheel for almost every platform and software stack out there, each with its own corpus of exploitability knowledge.
Which brings us back to the bugonomics of defense vs offense. If I'm an attacker, I obviously care about exploitability. That's how I get things done.
But if I'm a defender, in the developer sense of the word, then I probably care about general code quality more.
Bugs, and bug density, are symptomatic of code quality. The fewer bugs, the fewer potential exploitation primitives … the fewer potential exploitation primitives, the fewer chances of successful exploit chains.
Vulnerabilities are just rebranded bugs, after all. Nu pun intended.
So, the things we need to do to improve code quality are, by definition, the same things we need to do to reduce vulnerabilities. If that relation holds in both directions, then perhaps there is more bang for buck in pipeline strategies that focus on code quality and triage speed as a whole?
I do think it's important for developers to understand the ideas behind exploitation. To have a firm understanding of how input has influence on your code, and how sometimes that influence can pivot into behavior that benefits an attacker.
But we turned the side-effect into its own thing. Glorified it. Built careers out of it.
So here we are.
Knee deep in bugs, trying to convince the world that the ones we understand the best are the most important ones. Chipping away at them one email notification at a time … hoping that whoever receives them … agrees.
waves hands furiously
Love,
Bas
- [1] https://twitter.com/benhawkes/status/1298000068436324352
- [2] https://anti.computer/random/microsoft-exploitability-index-whitepaper-2009.pdf
- [3] https://openwall.info/wiki/_media/people/jvanegue/files/soundness_of_attacks.pdf
- [4] https://download.vusec.net/papers/blindside_ccs20.pdf
- [5] https://seanhn.files.wordpress.com/2019/11/heelan_ccs_2019.pdf
Blaaaaaaag
So after experimenting with Twitter for a few months I've come to the following conclusions:
- I'm too easily distracted to have anything of the sort on my phone
- I feel very uneasy with many-to-many short-form ephemeral comms
As such I've decided to quietly retire my (mobile) Twitter and just rant to an audience of one (hi mom!) on here instead. Historically I would employ the DailyDave as a transport for my nonsense, but I'm no longer employed by uncle Dave and it would feel kind of weird I think.
So, obscure blog in random corner of the Innerwebs it is!
To that end, I spent a little time setting up a convenient publish pipeline out of Emacs using org-static-blog (https://github.com/bastibe/org-static-blog).
It has exciting features such as … umm … none … it just turns org files into html. No comments. No notifications. No nothing.
Perfect.
rantrospective
note: this rant is best read to Placebo's "Too Many Friends" on repeat: https://www.youtube.com/watch?v=p21YfobjaVA
I'm turning 40 in a few months, so I figured now is as good a time as any for a rantrospective of sorts. To revisit some old assumptions, and perhaps state some new ones for a future that, as is tradition, seems uncertain at best.
I've used the infosec rant as a vehicle to spout my opinions on a variety of industry related things over the years. Some of it for entertainment and some of it for dramatic effect through what is probably best described as infosec edgelording … but most of it simply because I enjoy writing. I enjoy spinning yarns, to take your mind to places that might be a bit uncomfortable or counterintuitive, to argue the extremes in an effort to make sense of the in betweens and hopefully to put a grin on your face in the process.
Like most things in life, the way I feel about our industry has been a work in progress. My opinions are about as valid or invalid as anyone else's who collects a paycheck off of what we do, or claim to do, anyways.
But, it's almost 2020 … and here we are … our exploits are killing people and our memes are winning elections. Holy hell.
The corporation has replaced the nation state but no one seems to have caught up to that fact yet. The only real borders are those between ideas and ideologies. Fragmented consensus has replaced reality, a multiverse of opinion-fueled echo-chambers with which we seem to identify more than any physical geography.
The stars on our flags replaced by grinning emojis and animated gifs, we pledge allegiance to our feedback loops. The likes, the shares, the crashes of our fuzz farms, Twitch streaming ourselves into the Ethernet, because surely it matters to someone out there. The audience.
Life is a recorded performance. Every bit of it tagged and archived. Your greatest hits available to anyone, on demand. Hash tag what the fuck.
Who we are now is fluid. You are an idea. Data. Code. Malleable. Each of us wandering around the edges of a personalized information dystopia, desperately looking for that special something that will Turing complete us.
I'll make you a mix tape.
I used to believe I was an information anarchist … but I've begrudgingly accepted that if anything, I'm probably an information nihilist. Or maybe that's just a cool way of saying I feel like I'm being taken for a ride most of the time. But, true to form, I suspect it doesn't matter much either way.
You're about two paragraphs away from forgetting everything you just read.
That's most likely just brand loyalty to the logos but overall I make for a pretty shitty stoic so whatever.
Every so often I try to restate my assumptions. I attempt to reflect on my core opinions on most things to see if I should adjust them. It's all too easy to hang on to strong opinions just because you've identified with them for so long. Or because you've surrounded yourself with people that subscribe to the same belief systems. Or just because, deep down you realize, it pays the bills right now.
As I look back on some 24 years of bit fiddling and I look at the world around me, I can only come to the conclusion that, at least for my generation of hackers, the punch line has finally arrived.
Our world is subjectively falling apart, which is probably better than it objectively falling apart, but it sure doesn't feel like it. The things that used to be jokes to me don't seem funny anymore. I see real pain everywhere, I see people at each other's throats, and I see people struggling to stay sane under a suffocating blanket of input induced paralysis.
The illusion of choice disguised as a Twitter poll.
Maybe this is just how midlife crisis manifests itself for me, and if so I'm pretty bummed I didn't get the leather jacket and 2nd-hand Corvette option, but either way it's put me in a bit of a reflective mood.
I have a history of poking fun at the more bombastic manifestations of full disclosure as a security mechanism. Mainly because I don't enjoy security theatre much.
Pause for dramatic effect.
While ad hoc bug fixing certainly, you know, fixes bugs, it's by no means a systemic approach to software security. I suppose I lean towards the not so popular Linus Torvalds school of thought when it comes to bug fixing and how much hooha needs to surround the process. While I think security impact certainly warrants triage priority, for all other intents and purposes a bug, is a bug, is a bug.
Instead of celebrating a single instance of a bug, that time and energy is probably better spent checking for variants of that bug. Lest we end up with an endless summer of marketing campaigns for essentially the same vulnerability patterns.
Over, and over, and over, again.
But I get it. There's product to sell. Grad schools to finish. Likes to be harvested. I'm just as guilty as anyone else. Please share and subscribe.
There's an interesting discussion to be had about whether or not software security is a side effect of quality or if it's its own thing. If a piece of software performs well, robustly so, does the fact that there exist some unintended states in that software detract from its quality?
Depends on your definition of quality I suppose. I've always argued that our industry is just a weird offshoot of quality assurance. We are creative debuggers exploring unintended state space. Where we differ is that we revel in said unintended state space. We built an entire industry on top of side-effects. Offense, defense, and everything in between. It's all built on what isn't supposed to be.
While the moral and ethical Venn diagrams of the various factions of the security research community are surely fascinating, I do believe that anno 2020 we mostly perform the same labor.
We explore the unintended state space of information systems in an attempt to subvert the intended behavior of those systems. The fork in the road happens AFTER that initial labor. Team offense will often spend months building their desired weird machine on top of that first subversive foothold, and while some of team defense might do the same they ultimately aim to remove that unintended behavior from the equation altogether. Rinse and repeat.
The state spaces we are dealing with are enormous. One of my favorite talks on the subject is the late Joe Armstrong's "The Mess We're In" [1]. In it he explores how, as software engineers, we've accumulated an untenable amount of state space and the technical debt that's resulted from that is haunting. A simple C program with six 32-bit integers has more states than the number of atoms on the planet.
And that's just the software. Stack that with the software building the software, the software running the software building the software, the hardware running the … well you get my drift. This fantasy world in which we operate is almost limitless and the possibilities are endless.
That's probably bad news for security in the traditional sense. People have told me that most of my thoughts on this subject are just regular security assurance lore. That security is not some defined line, but rather a gradient of intent and probability. I guess they're not wrong. Probably.
But, if much of what we do is exploring unintended state space, and that unintended state space is for lack of a better term … "yuuuuuuge" … the deck of cards certainly seems to be stacked in team offense's favor doesn't it?
Maybe. But more on that later.
Manually auditing to find a handful of usable bugs is a workflow that serves team offense just fine … they only need the one win … but it is, for all intents and purposes, a horribly non-systemic approach for team defense.
I've referred to this as vulnerability masturbation in the past. Probably not the most eloquent of terms, but I believe it carries the point nicely.
Having said that, I have evolved my opinion a bit on the use of so called 0day research teams as a driver for defense. They have, in a lot of ways, delivered the seeds and insights that team defense has needed to drag themselves out of the dark ages. The influx of legitimate offensive talent into the public realm has most definitely reduced the pinata-factor on the defense side of the software security house. Turns out that a lot of people just needed to realize what the ball looks like before they could run with it.
Fair play.
So I'll concede that point to Ben Hawkes et al. I was wrong. You were right. While I still don't think that fixing single instances of vulnerabilities has much practical impact on systemic security, the work coming out of the p0 teams, and teams like them, has led to a new generation of hybrid defender. One that isn't looking for zebra stripes when hunting for tigers. That is a tangible defensive success and it is a direct result of this effort, and efforts like it.
I've also come to appreciate that you can use a team like that to simply keep pace on a given surface inside of the operational loops of an offensive team. If you can fix bugs inside the R&D windows of your offense adversary, you can win for some definition of winning, or at least hugely frustrate the offensive effort. But to do that effectively, it requires commitment to surfaces beyond "let's audit this thing for a bit and then move on".
Effective does not always mean efficient and realistically you need "adopt a highway" style dedication to e.g. WhatsApp just to keep pace with an outfit like say, NSO Group. That tells me that doing this work at the tail-end of the development pipeline is probably not where the bang for buck is from a defensive perspective. Once code ships it has sailed, and everything from that point on is just playing catch up to a collection of unknown adversaries with unknown abilities. You lose the initiative.
Relying on the vulnerability research altruism of 3rd parties who do not own your code, no pun intended, does not scale effectively. But you can use the work of those 3rd parties to distill and replicate the essence of their work at a more suitable point in your development pipeline. Instead of going "ok cool we'll fix this one thing", you go "alright, let's tag and bag everything that smells similar". If you have the budget, get your own platoon of xdevs, there's plenty of 'em out there now. Strap them into your CI/CD pipelines and make them play nice with the rest of your engineers.
Or you can keep relying on the kindness of strangers too, I suppose.
Either way, offense and defense are much more fluid ecosystems now. The old lines in the sand are gone. Not least because an entire generation of exploit developers have since migrated over to defense or wherever else the job is. That's only natural I suppose. Likewise I'm sure there's a new generation of scene kids rifling through our mailspools, or signal messages, or whatever, as well.
On "freedom vs security" issues, I remain convinced that we should not try to legislate our way to security. There's just too many unforeseen consequences to trying to treat code as anything other than speech. Especially as long as there's Giuliani-grade experts [2] involved in any of our legislative and policy processes. Any time you try to codify what math is or isn't allowed to do, you are playing with the kind of backfire that can burn you for generations. Once you concede a personal freedom to the state for the sake of a perceived increase in security, it's very hard to get it back.
In any event, when it comes to software security as an altruistic pursuit, if you're an Open Source bug hunter that actually wants to make an impact, perhaps consider becoming a regular contributor to the projects you care about and babysit their git commits for the long term.
And if you're on team offense? Well, consider becoming a regular contributor to the projects you care about and babysit … well … nevermind.
Cue fourth wall breaking meaningful stare.
For me personally? I don't know. It's surely still all just puzzles, but some of those puzzles are more and more starting to feel like the lament configuration. Every side locking into place feels like we're moving towards something much more sinister than we ever intended.
So … maybe, just maybe, some puzzles are best left unsolved.
Close curtains. Applause.
[1] "The Mess We're In", Joe Armstrong, StrangeLoop 2014, https://www.youtube.com/watch?v=lKXe3HUG2l4
[2] Anything Giuliani has ever said on cyber anything, ever.
An Immunity XMAS
NOTE: As of 12/23/2019 I have moved on from Immunity, Inc.
I've worked at Immunity, Inc. for a long time. 15 years now. That's a lifetime in tech and almost unheard of in the infosec industry it seems.
I often get asked why. Why so long with a single company? This is where I'm supposed to give the elevator pitch for how amazing the culture is, and how every day is a joy filled near-religious experience of comradery and team cohesiveness.
But that's all nonsense, of course. While I love my team, past and present, what it really boils down to is that Immunity provides an environment where obsessive problem solvers are fed a non-stop stream of puzzles to solve. It's a personality trap.
I'll explain with a trip down memory corruption lane.
Rewind to 1991. I grew up in a tiny village in the east of Holland in which I spent my early teens cracking C64 games. That was the result of living down the street from someone that just happened to be involved with a very large international "warez" group.
What are the chances, eh?
The odd thing about the area where I grew up is that it's always been full of slightly dubious technology enthusiasts. My father ran a pirate radio station for the longest and to this day the region is a hotbed for pirate radio activity.
Ironically my Dad's name is Arrie…. ok, nevermind.
At 11, my weekends usually revolved around visiting the "computer club" one village over to copy and "crack" software, all the while pouting at the Amiga people because we couldn't afford one. I say "crack" because I was a kid and a lot of the things I was doing were just dumb copy-bypass party tricks. Having said that, the room was full of legit crackers and demo-scene people. All in this little countryside community. Go figure.
As a result of this early exposure, my attitude towards computers became a tad adversarial. They should do what you wanted them to do. If software got in the way of that, well then that was just a wrong you had to right yourself. By any means necessary.
I mean it's YOUR computer, right?
When I was 16, the aforementioned warez role model of my early teens got raided and arrested. This is 1996. By this time I was a fervent UNIX enthusiast and quite the little terror on IRC. I spent a lot of time C and assembly programming for all the reasons teenagers with my background and disposition generally spend a lot of time C and assembly programming.
I mean it's OUR computer, right?
A year later, 17 year old me goes off to Journalism school. By the time I graduate in 2001, I am 4 years deeper down the exploit development rabbit hole because, as it turns out, Journalism school isn't the most academically demanding endeavor in the world. I don't regret it though. You learn all sorts of useful things in Journalism school. I can LexisNexis with the best of them and my search engine game is on point.
Why no CS you wonder? Well, 17 year old me figured I shouldn't mix the things I did for fun with the things I'd do for a living. That way the things I did for fun would, you know, stay fun. 17 year old me was probably wiser than I give him credit for in that regard, but I digress.
I graduate and after some time on a newsdesk for a regional paper I decide to go full freelance. With that decision I find myself in the midst of the perfect storm. I'm 21 with a job I can do from home and a lifestyle that requires little to no money. As a result, the next 3 years are a bit of a redacted blur and we arrive in 2004, the year I joined the Immunity team.
I knew Dave Aitel from an IRC island. I'd helped him out with some projects but I wasn't necessarily interested in joining the infosec industry. I knew he ran his own company but xdev was something I did for fun and that was the end of that.
So, like a true infosec professional the strength of my convictions had my suitcases packed by March 2004 and I was off to go sling exploits for uncle Dave.
Hello Immunity, Inc.
I was now 24 and the last 8 years of my life revolved around Linux and UNIX exploitation. I hadn't touched a Windows system since, well, ever. Home PC wise I went from DOS to Linux. The only Windows experience I had was the result of the odd support mission for a family member.
Dave knew this. So his first task for me was to … "write me a Windows 2000 remote".
Haha, ok Dave, sure.
Turns out that he was, in fact, dead serious. Shortly thereafter was also the first time I heard him go: "is it done yet?" … a phrase that would haunt me, and many of my colleagues, for years to come.
In fact, for a good time, repeat that question to any Immunity alumni and try to catch the ever-so-slight twitch in their facial expression.
In any event, this request started my somewhat awkward relationship with the Windows operating system in that, for a long time, the only interaction I had with Windows was to write exploits for it. I wrote a Windows debugger with Python bindings called PyDB but I was the only one to ever use it I think. This was pre the OllyDBG fork that became known as Immunity Debugger. PyDB was what I used for things like "where'd my bunch-o-A's go?" and it supported most of my userland debugging on Windows for quite some time.
Of course, as is tradition, I have no idea where the PyDB source code went, but I hope it's happy somewhere.
Anyways, over time I came to realize that Dave's thing was to take people out of their comfort zones. No experience working on compilers? Guess who's porting MOSDEF to SPARC. Anxious about public speaking? Guess who's teaching a class next week a thousand miles from home. Solo. You don't like doing web stuff? Guess who's handling the next 3 months of web-app consulting pipeline.
Early life at Immunity was a constant exercise in sink or swim, and that's how another Immunity staple came to be: The Immunity NOP certification, a throwback to those sink or swim days.
The idea behind it was simple. Put someone in a time pressured environment, with tooling that either sucks or is unfamiliar to them, and have them perform a task that they would otherwise breeze through. Hilarity ensues.
We ran the NOP as an interview for the longest. Just like my first week, the original version was based on a simple Windows 2000 remote stack overflow. The catch was that you got little to no tooling, a slightly broken debugger, and a bunch of people staring at your screen giving you helpful tips such as "have you tried applying Computer Science yet?"
It sounds a bit like hazing bullshit in writing, but I think it was always fairly good natured. If someone really got stuck there'd usually be an actual helpful hint or two buried in the commentary.
All sorts of interesting personality quirks come out under that kind of pressure. Anything from "I can't type on this keyboard!" to full on rage quits. As time went on, we became masters in curating only the finest of crappy input peripherals. Slightly off-center keyboards, 0-dpi mice, just the right amount of input lag.
We had such sights to show you.
To what end? When we were still using the NOP as an interview, the goal was to find people that would adapt to the environment in front of them. Not finishing the exercise was never a disqualifier, but giving up on the exercise due to discomfort or unfamiliarity was a big red flag in our eyes.
Over the years I became less and less confident that the NOP actually had value as an interviewing tool. I mean it beat drawing linked lists on a whiteboard or playing rote-memory bingO with algorithmic complexity, but snapshot performance isn't really that good of a metric to evaluate potential team members.
Ultimately we stopped interviewing people in that way, and we moved the NOP into the conference circuit as a bit of a sideshow. Which is where it remains to this day. The format has even been picked up by outfits like Raytheon, who do a way better job with it than our old "shitty laptop and a notepad" approach.
Personally I kind of like the ratty simplicity of the no-branding NOP table. No banners, no merch, just you and 30 of your closest friends questioning your every move. Praying to god you at least beat Dave's time.
So the NOP, in many ways, is a very personal thing to me. Every time I run one, and I see someone struggle, I think back to that first week. Alone in a new country, starting a career I had zero experience in, with no one to fall back on.
Those are usually the times where I'll whisper a small tip in between the heckling. Because the goal isn't for you to fail, the goal is for you to realize discomfort is nothing to be afraid of. That it's part of the process. That you belong.
I'm 39 now. Tick tock. Sink or swim.
The dream of the LISP machine is alive in the 90ies
I ate some bad chicken last night.
Really it all started a few days ago when I saw a Chick-fil-A commercial about their heart-shaped 30pc nugget Valentine's Day special. That's where that particular piece of data first entered my system.
I didn't think much of it at the time.
If you're wondering how I could let delicious chicken trump my ethics I would counter that, if you're reading this, you are probably an information security professional as well.
So, as it turns out, heart shaped chicken nugget containers are a super popular token of affection in the greater Miami area, and I ended up with 2 spicy chicken sandwiches instead. Well that and a 12pc nugget combo. And a small vanilla milkshake. Nothing says I love you like small vanilla milkshakes.
I don't know if the chicken was bad, per se, or if I'm just too Northern European to deal with spicy things, but here we are. 5am. My wife and dog are both fast asleep and I'm thumbing this on my phone in an email to myself. Waiting for the waves of agony to pass.
Ironically this is probably the only way you'll get a DailyDave post out of me these days. Locked in the bathroom, tethered to a phone that I'm not allowed to browse the Internet with. Also, you can only read the back of the wet wipes container so many times, so let's have at it.
I like to think I'm as good an armchair philosopher as anyone else that watched that first season of True Detective.
So let's talk about data as code or, more generally, input as influence.
Halvar has always been great at formalizing the intuitions of a certain generation of exploit developers.
His ideas about programming the weird machine harken back to the operating paradigms of the LISP machines of lore, but in a way also formalize a class of thinking that is fundamental to not only exploit development, but input based influence on algorithmic processes in general. I would posit the concept extends far beyond the realm of computing. Team Russia is programming the weirdest machine ever right now and to astounding effect. Treating an entire populace as a programmable entity through orchestrated manipulation of its information sources.
Input as influence.
I don't know if algorithmic process actually means what I think it does. I got the Knuth boxset, but mostly because Amazon had that pricing glitch on it way back when. In fact I got two. One for me, and one for a friend. He, in fact, did read all of it. We lost touch because he's insane, but I think he's out there in the world somewhere doing things more productive than porting PaX to MIPS these days, or at least I hope he is. I'd like to think I had some small part in that. Or at least that me and Jeff Bezos together, did.
Tangent aside, having a novel thought on exploitation is hard. Heck, having a novel thought, period, is hard. Hive mind and all. Halvar would probably be the first to tell you that a lot of what he's saying is a mere representation of a certain zeitgeist. Not in the least because he's German and Germans say things like "zeitgeist". That and "schnitzel".
But what you perceive as hard is relative. Generally a result of context, experience, repetition, intuition, aptitude, and whether or not someone actually told you something was hard to begin with.
It's about having insights that simplify your understanding, and carrying those insights forward into ever more intricate layers of what it actually is you are doing. Abstraction as simplification. Collapsing large ideas down into building blocks. Most of those ideas probably aren't your own. But it's hard to be novel, or so I hear.
Sometimes all it takes is a simple context switch.
I remember when I first started looking at heap corruption as a kid. Growing up in a small Dutch village in the '90s, my access to formal computer science resources was pretty limited. I had an old beat up copy of K&R and I knew some assholes on IRC. That's about it.
Over and over I'd juggle logical vs physical representations of memory layouts in my head. I remember looking at something as simple as an unlink operation, and all the pointer dereferences would just not stick without me having to look them up constantly.
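For the younger readers, the operation in question went something like this. A from-memory sketch of the old dlmalloc-style free-list splice — field names are illustrative, not lifted from any one allocator:

```c
#include <assert.h>
#include <stddef.h>

/* A toy free-chunk header in the spirit of the old dlmalloc layout:
 * free chunks sit on a doubly linked list via fd/bk pointers stored
 * inside the chunk itself. */
struct chunk {
    size_t prev_size;
    size_t size;
    struct chunk *fd;   /* forward pointer to next free chunk */
    struct chunk *bk;   /* back pointer to previous free chunk */
};

/* The classic unlink: splice p out of the free list. The two writes
 * below are exactly why corrupted fd/bk fields were so interesting --
 * with attacker-controlled pointers, fd->bk = bk becomes a
 * write-what-where primitive. */
static void unlink_chunk(struct chunk *p)
{
    struct chunk *fd = p->fd;
    struct chunk *bk = p->bk;
    fd->bk = bk;
    bk->fd = fd;
}
```

Four pointer dereferences, trivially simple on paper, and yet exactly the kind of thing that refused to stick in my head without the visualization trick below.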
For a while I just committed entire chunks of the allocator algorithms to rote memory. Writing down the parts that were relevant to the primitives I cared about over and over. That worked fine. I was never a Dvorak or a Scrippie, but I was stubborn, angry, and obsessed with problem solving. Enough to get the job done.
Unlike the Germans, the Dutch don't say things like "zeitgeist", but we do say things like "schnitzel".
Anyways, one day instead of trying to treat my brain as a glorified C compiler, I just started visualizing the ideas behind what it was I wanted to do. Imagining chunks as just buckets of data that I either fully or partially filled with liquid I controlled, coloring the logical and contiguous memory layouts in my head accordingly.
Then it became easy. I mean you still have the implementation details of whichever allocator you're dealing with, obviously. Sizes here, pointers there, bitmasks up and down. But that's just typing on the keyboard. That's stuff you SHOULD look up by reference. Concepts and big picture understanding are things you should be able to juggle in your head.
And at the risk of fetishizing Halvar quotes to Dave Aitel levels, I remember his original quests to visualize all the things because "humans are designed to recognize the shape of the animals they hunt, and turn them into schnitzel, not stare at numbers". I paraphrase.
Our generation of hunters just happens to track game around allocators, kernel surfaces, and logic flows.
Of course they teach you that stuff in week 1 of most programming classes. Linked list operations, that is, not German hacker philosophy.
But I'm in my teens in a countryside village in the east of Holland. The closest thing I have to a computer science curriculum revolves around people messaging me on IRC about how I should really start grepping for syslog(3) calls that don't have quotes in them. The closest thing I have to philosophy classes are Scandinavians preaching about the gospels of non-disclosure. A potent recipe to build weird minds, really.
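The bug class they were pointing me at, for anyone who missed that era: format string vulnerabilities, where user data ends up as the format argument itself. A minimal sketch, with snprintf standing in for syslog(3) so the effect is visible without a log daemon:

```c
#include <stdio.h>
#include <string.h>

/* The thing people were grepping for: a *printf-family call where
 * attacker data IS the format string. Any conversions in the data
 * get interpreted -- in the bad old days that meant %n writes and
 * %x stack leaks straight out of syslog(LOG_INFO, buf). */
static void log_bad(char *out, size_t n, const char *user)
{
    snprintf(out, n, user);        /* user controls the format string */
}

static void log_good(char *out, size_t n, const char *user)
{
    snprintf(out, n, "%s", user);  /* user data stays data */
}
```

Hence the grep: a syslog call with no quoted format literal next to it was, more often than not, a bug.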
Halvar things aside, I think that is the difference between someone that looks at e.g. the V8 codebase and is like "oh Christ that is entirely too much C++" and someone that, in a fairly pragmatic fashion, just identifies where data is input, how that data is acted upon, and how the algorithms acting upon that data are influenced by said data. The ability to visualize, collapse, and abstract away whatever does not matter to the problem you are trying to solve.
Unless you're a fuzzer of course, in which case I harbor some weird late-'90s resentment towards you that is based on nothing but spending too much time with a generation of hackers that read "Fight Club" like it was the only thing available after eating some bad chicken.
Hard is relative. We dance the same dance over and over again in a hall full of ghosts of exploits past. I am Jack's haunted ballroom picture.
One of my old friends who is probably one of the best exploit developers I've worked with, never went after "hard" bugs. He just found what he referred to as "simple bugs in hard to reach places".
That is a powerful concept. If you're willing to dig deep enough, shovel enough reverse engineering work, go deeper into the coal mines than anyone else before you, chances are you will win. I mean you might have black lung by the time you do, but still.
Of course one of my other friends is the exact opposite and he insists on finding ridiculously complicated bugs in simple places. I think it kind of depends on what you're after. These days with the Google borg spinning up trillions of cores and Google bucks to fuzz input into whichever surface they can get their grubby little hands on, I'd tend towards simple bugs in hard to reach places.
Having said that, beyond code isolation, rapid advances in branch coverage by modern fuzzing approaches kind of moot both. But, it appears that anno 2019, someone can still comfortably sit on a pile of 0day in the same code bases that everyone else is fuzzing CVEs out of. What is today's CVE if not yesterday's 0day?
It's probably some intricate dance between code flux, bug density, surface exposure, and whether or not there's a strategy behind the effort beyond "this is what Tavis felt like auditing this weekend".
Most of computer science is this weird struggle to emulate our own biological processes either consciously or subconsciously. It makes sense because it's the only way we know how to solve problems, perceived or otherwise.
Our problem solving paradigms are based on our biology. Collectively pushing towards an understanding of the world through the filter of being a part of it. It's hard to stay objective, if objective even exists. Consensus based reality is probably the best you can hope for.
But with the right inputs, it would appear you can influence almost anything to the point of control. Programs, people, society … and, perhaps, maybe even reality itself.
It's 7am. Time for work.