There are three important things you need to understand about artificial intelligence:
- It exists now.
- It’s more capable than you know.
- It will replace you (unless you redefine who you are).
I’ll examine each one – it’ll be interesting to see what you think.
It exists now.
Artificial intelligence, as a concept, has been around for a long time. From Hephaestus building the “fighting machines of the gods” to Mary Shelley’s Frankenstein, humans have thought about and created stories around our desire to replace the mighty gods with ourselves for several thousand years.
We see it today with modern franchises like “The Terminator” and “The Matrix”, and further back with “2001: A Space Odyssey” and “Do Androids Dream of Electric Sheep/Blade Runner”. AI is so popular and accessible as a concept in mainstream media that if you ask people what they think it is, they think they’ll be able to answer.
But here’s the rub – it’ll sound something like “Killer Robots in the Future!”. Rarely do you hear anyone mention the effect of AI on the stock market or the Amazon website. I find this lack of awareness about AI frustrating and frightening because AI exists now, it has existed for decades, and it impacts almost every aspect of our daily lives.
The birth of AI as a research area happened in 1956 at Dartmouth College in New Hampshire, as a small but well-funded programme that hoped to create a truly intelligent “thinking machine” within a generation. They failed, of course, as creating intelligence isn’t particularly easy. But they laid the foundations. As others followed, our understanding, and the number of practical uses for AI, accelerated. And like most technologies, the rate of improvement in AI can be plotted as an S curve.
Technologies tend to have a slow adoption rate early on as a result of the limited capabilities they offer. As the capability increases, so does the adoption rate. Unlike a simple exponential curve, the S curve also captures the slowing of a technology’s growth as it approaches its maximum potential, or as market forces push funding into new technologies which will ultimately replace it.
A better way to picture the impact of a specific technology on our lives is as a game of chess. At the start of the game, the choices you make are small and of little significance. You can recover from a mistake. But by the time you reach the middle of the board every decision you make will have large significance, and each creates a win or lose situation. Games of chess also follow the S curve.
By the time a technology reaches the lower midpoint of the curve it starts to have a major visible impact, with the velocity of that impact suddenly starting to increase. Artificial intelligence is now at that point. AI is not just here, it’s a punk teenager and pissed off at its parents.
It’s more capable than you know.
You interact with AI and one of its children, Machine Learning, every time you use any web-connected device. You do it every time you search, shop online, fill out a form, send a Tweet, or up-vote a comment. It even happens when you buy petrol, turn on the tap, drive your car to work, or buy any newspaper. Every aspect of your life is measured, stored, and used at some point by an algorithm.
Everything you do while living your life is kept as data so that machines can later parse it and use it to identify patterns. The effect is huge.
Roughly 70% of the world’s financial trading is controlled directly by 80 computers that use machine learning to improve their own performance. They can recognise an opportunity and carry out a purchase or sale within 3 milliseconds. The speed at which they operate means that humans are not only incapable of being part of the process, but have been designed out of the system completely to reduce error.
AI is rapidly getting to the point where it is better at diagnosing medical conditions than teams of doctors. Every patient report, every update to a patient’s condition, and every case history is available as digital data to be parsed, analysed and scored in real time to diagnose conditions that require a breadth of knowledge no single person has. In one case from Japan, AI took just 10 minutes to make a cancer diagnosis that oncologists had spent months failing to reach.
Statistically, computers are better drivers than people are. In the 1.4 million miles Google’s fleet of self-driving cars have covered on public roads, “not one of the 17 small accidents they’ve been involved in was the fault of the AI”. There was the widely reported case of a Google car driving into a bus, but deep analysis of the incident showed that the bus actually drove into the car. A study by Virginia Tech showed that Google’s autonomous systems are 2.5 times less likely to have a car crash than a human. Given some of the behaviour I’ve experienced on the roads, I think this is a conservative number. AI is also being used to fly planes, with pilots of the Boeing 777 on average spending “just seven minutes manually piloting their planes in a typical flight” (anonymous Boeing 777 pilot). The United States and British governments have had fully autonomous drones flying for well over a decade.
Computers are now writing articles, poems, and even screenplays. Netflix’s now famously complicated taxonomy may have been put in place by people, but it’s machines that use it to work out what the next hit TV show will be. Associated Press uses AI to deliver over 3,000 reports per year, while Forbes’ earnings reports are all machine generated. These aren’t lists of numbers – this is long-form copy. Many sports reports are now written using AI, and they are published the instant the game ends. Before the team has left the field, the articles are being read. A study by Christer Clerwall showed that when asked to tell the difference between machine-written and human-written stories, people couldn’t. I mean, can you tell which parts of this blog were written by a machine?
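At its simplest, automated sports reporting is data-to-text generation: structured match data goes in, templated prose comes out. The sketch below is a deliberately crude illustration of the idea, not a description of the systems AP or Forbes actually use; every name and template in it is invented.

```python
# A minimal sketch of template-based data-to-text generation, the simplest
# form of automated sports reporting. Real systems are far more
# sophisticated; all names and data here are invented.

def report(game: dict) -> str:
    """Turn a structured match result into a short human-readable report."""
    margin = abs(game["home_score"] - game["away_score"])
    if game["home_score"] > game["away_score"]:
        winner, loser = game["home"], game["away"]
    else:
        winner, loser = game["away"], game["home"]
    # Pick a verb based on the margin of victory.
    verb = "edged past" if margin <= 3 else "beat"
    return (f"{winner} {verb} {loser} "
            f"{max(game['home_score'], game['away_score'])}-"
            f"{min(game['home_score'], game['away_score'])} "
            f"at {game['venue']} on {game['date']}.")

print(report({"home": "Kent", "away": "Sussex",
              "home_score": 24, "away_score": 21,
              "venue": "Canterbury", "date": "Saturday"}))
# → Kent edged past Sussex 24-21 at Canterbury on Saturday.
```

Production systems layer hundreds of such rules (and, increasingly, learned language models) on top of richer data, which is why readers struggle to tell the output from human copy.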
Computers are better at designing their own components than people are. In the 1990s, Dr Adrian Thompson of Sussex University wanted to test what would happen if evolutionary theory was put to use by machines in building an electrical circuit. The circuit was simple: all it had to do was recognise a 1 kHz tone and output 0 volts, or recognise a 10 kHz tone and output 5 volts. An algorithm iterated over 4,000 times before finding the best possible circuit. The circuit was tested, and it worked perfectly. The surprising thing, though, was that nobody could explain how the circuit worked, or manage to produce a better one. This experiment has been repeated many times, with more and more complexity introduced, and each time the machines make parts for themselves better than people can.
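The engine behind Thompson's experiment is an evolutionary loop: generate candidates, score them, keep the best, mutate, repeat. His real work evolved FPGA configurations against analogue hardware; the toy below only illustrates the iterate-evaluate-select cycle on a bitstring, with a made-up target standing in for "the behaviour we want".

```python
import random

# A toy evolutionary loop illustrating the iterate-evaluate-select cycle.
# This is NOT Thompson's actual setup (he evolved FPGA configurations);
# the bitstring target is a stand-in for the desired circuit behaviour.

random.seed(42)
TARGET = [1, 0] * 16                     # the "behaviour" we want to evolve

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(4000):           # the article mentions ~4,000 iterations
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Keep the fittest half untouched, refill with mutated copies of them.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print(generation, fitness(best))
```

Because the survivors are carried over unchanged, the best score never decreases, and over a few thousand generations the population converges on a perfect match, typically without anyone being able to say which mutation mattered.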
Computers are creating art, helping to cure the sick, improving themselves, and taking care of complex or monotonous tasks. We let them drive us, shop for us, fly us, and treat us. We let them form opinions for us, and let them entertain us. Where do people fit in?
It will replace you (unless you redefine who you are).
In 2013 it was estimated that 47% of job roles in the United States were at risk of being replaced by automated systems (See Rise of the Robots by Martin Ford).
And a lot has happened in 3 years.
While your interactions with AI can make your life easier and far more pleasant, they are designed to achieve something more – every time you do something that can be logged and compared, you are training the AI in human behaviour. But we can’t stop living our lives, so how do we stop the machines taking over?
If we change our point of view, AI can become our best ally and our most powerful tool. To explain properly, let’s take a look at this from the point of view of a web agency like Deeson – take a seat, because this is going to sound a bit like a Hollywood film script.
Each of the steps we take to create a site (discovery, design, testing, build, more testing, etc) follows a series of repeating steps, and as we’ve already learned, AI is very good at doing anything that people see as repetitive. If you can define each step as a model, with a standardised input that gives rise to a required output, you can automate the process.
A real-world example would be the first stage in graphic design, where we put a hierarchy around available content so users can find what they need. In the time it takes a UX designer to lay out a wireframe and get client feedback, an AI system could design and iterate millions of layouts, as well as test them against virtual users. Yes, the first 1,999,998 layouts might be broken, but with each iteration the solution becomes better. This happens now with a tool called multivariate testing – AI just increases the scale and the speed.
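Stripped to its essentials, that iterate-and-test loop is a search: generate candidate layouts, score each against a model of user behaviour, keep the winner. The sketch below uses a crude "virtual user" (a fixed priority table) and random search; real multivariate testing measures live users, and every section name and priority here is invented.

```python
import random

# A hedged sketch of automated layout search: generate many candidate
# layouts, score each against a simulated user model, keep the best.
# The "virtual user" is a crude stand-in, not a real testing tool.

random.seed(0)
SECTIONS = ["news", "products", "contact", "about"]
# Hypothetical click-through priorities a virtual user cares about.
PRIORITY = {"products": 4, "news": 3, "contact": 2, "about": 1}

def score(layout):
    """Reward layouts that put high-priority sections near the top."""
    return sum(PRIORITY[s] * (len(layout) - i) for i, s in enumerate(layout))

best = None
for _ in range(10_000):                  # each pass is one "virtual user test"
    candidate = random.sample(SECTIONS, len(SECTIONS))
    if best is None or score(candidate) > score(best):
        best = candidate

print(best)   # the ordering the virtual users "preferred"
```

Scale the candidate space up from 24 orderings to millions of full page layouts and the principle is the same: the machine can afford to try (and discard) vastly more options than a person ever could.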
With content hierarchy taken care of, we can now start graphic design. Designers prepare what are called “Style Tiles”, which are the basics we use to ensure the client’s brand is digitally ready, such as colours, fonts and tone of voice. These are the building blocks of the site. If we were using AI they would, along with the content hierarchy created above, be some of the properties in our input model. With these in place we can repeat the same process as above, iterating as many times as is required until we have a visual design which is on brand (defined by the tiles), while maintaining ease of use (defined as a metric).
With layout, colour, and typography resolved, content is next. This is the easiest problem to solve. AI is already writing content – it’s already able to understand the content requirements of people through the analysis of what we do, where we go on a site, what we say on social media, and through the ongoing testing of current site content. If the system knows we are building a site with a focus on coffee, and it knows what people on the web are talking about, it can generate stories and publish them within the design framework created during the previous step. Natural language systems do this already, all the time. The parts of creating a website that humans find the most difficult have already been solved by AI researchers.
Once the content is in place, housed in an on-brand and functionally useful design, which itself sits upon a semantically rich content hierarchy, what’s left? For the majority of websites, the answer is very little. AI systems, like the one thegrid.io claims to offer, are changing the way we think about the web, and web agencies need to take note.
Using AI to design and build a website from nothing but basic guiding principles seems like science fiction, but I know from personal experience that we can do it. My experiment involved the first version of the Science Museum’s Information Age microsite, where algorithms chose the content, designed the layouts, and published the site.
The content editor’s role changed. Instead of deciding upon a narrative, finding the right objects from the collection, building the associated content to support the narrative, and then writing the final stories, the editors became parents to the system. They took on a coaching role. They kept the system fed with objects and content, which the algorithms would test live through a series of experiments. Where engagement was high the content remained; where engagement was low, it did not. The editors had to reimagine their roles, but they weren’t replaced – they were augmented. In collaboration with the machines, they became capable of publishing as many stories as they could dream up.
The most important thing about AI is that we must understand it. If we embrace it and learn how to use it, we advance ourselves. If we ignore it or bury our heads in the sand, it will overtake us. The future of AI in our industry will be defined by those of us who adopt it, and I sincerely hope you do too.
Credit where it’s due, I stole the headline image from: https://www.linkedin.com/pulse/attack-so-killer-robots-business-hiring-bots-katie-bazemore
The National Archives: A Vision
A quick note to the reader.
This document was written as an appendix for a wider report being presented to The National Archives. It covers my current understanding of digital archives, their importance, and a vision for their future.
Whilst I have previous experience working on digital archives, and with archivists focussing on digital preservation, I’ve stepped slightly outside of current thinking to explore some alternative spaces digital archives could fill. As such, this document should be read as one direction to explore, and one assumption to test.
Archives provide a foundation upon which our societies build.
A new idea, one created from nothing, is an illusion. The greatest discoveries our species has made, these celebrated moments, are theatre. Easy-to-understand dramas for all of us who don’t specialise in that specific subject.
The discovery of the Higgs Boson didn’t happen in an instant.
The realisation that hanging brick walls from a central metal structure would allow us to build higher was not found in a flash of inspiration.
Even mundane inventions, like PVC chairs, were built from discoveries, which built from discoveries, which built from discoveries…
The preservation and sharing of ideas and knowledge has allowed us to move from living on great plains to living in great cities. As we have changed our behaviour from paranoid, closed, and tightly controlled to willing, open, and loosely coupled our shared knowledge has pushed our collective achievements further than anyone could have imagined.
Without the discoveries of previous generations we have nothing to stand upon, and no place to start. Our archives, our libraries, our collective knowledge is our bedrock. The foundation of everything we do, every direction we take, and every success we have.
Preserving and passing on knowledge is a tribal behaviour, and each tribe has its own methods.
Song, art, chants, dance, etc.
They are all ways to tell a story.
To preserve an essence.
Stories are powerful things. They have allowed us to keep safe the spirit of our cultures through centuries of rapid change. The original facts may be lost, but the essence carries on. Its impact is the same.
Unlike a physical object, which carries a large amount of its meaning with it, digital data carries none. The way it looks, feels, sounds, and reacts is all brought about not by the data itself but by whatever is representing it.
Data lives a vicarious life, seeing the world through image viewers, audio players, text parsing applications, and a myriad of communication protocols. Without these layers the data is a useless string of zeros and ones. And so they too must be preserved, along with the links between them and their target files, and any additional dependencies they have.
The safer you want your content to be, the more layers of preservation are required. As time and technologies pass, the number and complexity of layers grows, and the abstraction increases. An IFF file from 1991 is no longer supported on any available operating system. To access it you must emulate a hardware stack, upon which you can emulate an operating system, at which point you can access the image file. To preserve this file in its original state beyond the next decade will require you to either rewrite your hardware and software virtualisation tools, or add an additional layer of virtualisation on top.
The songs become more complicated, require more people to sing the additional parts, and introduce more potential for error.
Preservation is the outcome of a series of decisions you take, starting from now. It is not an achievable goal in its own right.
The digital archive.
As far as we can tell, digital archives share only one thing in common with analogue archives. They exist to preserve, as best they can, the knowledge they contain and to share that knowledge as widely as possible.
Aside from this they are entirely different.
The content type the archive is preserving is different. The methods of access to that content are different. The capability to share that content is different. And the skills required to design, build, and operate a digital archive are different.
A digital archive cannot preserve, in the strictest meaning of the word, its content. By its very nature, the act of accessing or migrating digital data changes it. And as long as these changes are not malicious, this is acceptable.
A digital archive cannot live in one place. It can’t have a set of servers in a room to live on. It can’t be restricted to one database technology. It can’t be restricted to one operating system type. It must be able to live everywhere, as everywhere is where you’ll find useful and relevant data.
And finally a digital archive must be open. Open software, open protocols, open virtualisation, open normalisation algorithms.
A new approach to digital archives.
If there is one lesson we have learned from our almost 20 years in business it is that things change quickly. And the speed of that change is increasing exponentially. Some of our clients have been with us for 8 years, and in that short amount of time they have re-platformed several times. No matter how much we plan for the future we have to accept that it is only our current best guess, and given enough time it will be wrong.
Our current best guess for the future of digital archives is to look at existing systems that already perform well in many of the areas any new platform must. Ironically the hackers and pirates of the digital world have provided us a great model to start the discussion – The Onion Router, Torrents, and Parity Files.
These systems work by spreading data across multiple places (including transmission protocols) with replicas of the data held in thousands of places. Each of these places, or repositories, is housed on an individual’s computer or smart phone. Different operating systems, different hardware, different storage methods, different network access capabilities, and yet they all communicate and share data seamlessly. Each repository is protected, each transmission is encrypted (with unique encryption keys per user), and the job of protecting the data from damage is given to the group, not to an individual system.
Whilst this approach is unlikely to provide all the answers, it gives us a great starting point. Petabytes of data, housed across millions of secured repositories, curated by specialists with insight from the group, and protected from malicious damage by the very protocols that perform the transmission. It gives us, at the very least, a new direction to consider.
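One concrete trick from that world is the parity file: rather than trusting any single repository, the group stores a combined "parity" block from which a lost block can be rebuilt. The sketch below shows the XOR scheme in its simplest form (the same idea that underpins PAR files and RAID); it is an illustration of the principle, not a proposal for a specific archive protocol.

```python
# Parity files let a group of repositories reconstruct a lost block even
# when no surviving repository holds a copy of it: a sketch of the XOR
# scheme (as used by PAR files and RAID), not any specific archive design.

def make_parity(blocks: list) -> bytes:
    """XOR equal-length blocks together into a single parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving: list, parity: bytes) -> bytes:
    # XOR-ing the parity with every surviving block yields the missing one.
    return make_parity(surviving + [parity])

chunks = [b"archives", b"preserve", b"everyone"]
parity = make_parity(chunks)

lost = chunks.pop(1)                 # a repository goes offline
restored = recover(chunks, parity)
print(restored)                      # b'preserve'
```

The important property for an archive is that protection is a property of the group: any one repository can vanish and the data survives, without any repository holding a complete, attackable copy of everything.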
Archives must be built to adapt: to cope with the technical, social, legal, and economic changes that will happen during their lifetimes.
Ownership of digital materials is a controversial and much debated subject. National and international laws have been, and continue to be, created in an attempt to protect access rights, and so technologies have to be invented to support these laws. The subject of whether DRM encryption should be included in an archived audio file is very interesting, but out of scope for this document. However, a digital archive needs to be able to adapt to changes in law, encryption, and rights holders’ wishes.
Build in layers and modules, not in one large lump.
If you build an archive today, some part of that archive will need to be replaced (and archived?) within the next 5-10 years. If you build your new platform from layers and modules you’ll only have to replace a part, not the whole.
Building in modules allows you to add capability over time, to focus on each capability as you build it, and to scale your infrastructure gradually. Each part can be built on the most appropriate technology, by the best technology provider for that part. And as new technologies are created they can be incorporated into the system as a new module, rather than re-architecting everything.
Define a schema for your modules from the outset.
Modular systems can grow in complexity very quickly, and so it is essential to define the sets of schemas and protocols that will pass data between them before any development begins in earnest. Decide upon and document the relationships between, the capabilities of, and the dependencies of each module.
As it is likely that any new archiving platform will be adopted by a wide variety of organisations and institutions, who will each have specific and possibly complex requirements, the base set of capabilities and dependencies should be kept as simple as possible.
Build your system to be technology agnostic. It should be capable of running on any hardware, on any operating system. The more open you make it, the less chance of failure you have and the wider the adoption will be.
Accept that your new system will never be finished. Legal, social, and technological changes will require any new system to be capable of fast and sometimes radical change. The more open you build it, the more capable of change it is, the less likely it is to require a rebuild and migration in the near future.
Build in layers.
Modules allow the distinct capabilities of the system to live alone, with no dependencies upon each other. Layers allow the broader requirements of the system to be separated, creating a more flexible and easily modified stack.
Our assumption is that the four main layers of a digital archive are the repositories, a secure access layer, the catalogues, and the end user access layer.
The repositories are where you keep your stuff. They can be built in any type of database technology. They can be big or small. They can store anything, or specialise in specific formats. And they can be owned by you or third parties. They can be encrypted, or not, and can make use of any available additional security.
The open and flexible nature of the repository types described here is a requirement of any digital archive which is built to adapt to the future. We don’t know what’s coming, what will be deprecated, and what new technologies we’ll have to work with. And so we must design for a repository layer which is incredibly diverse.
Repositories are not linked directly, and do not know of each other’s existence. If one is removed nothing breaks; if another is added, the same zero-impact outcome is achieved. Repositories can be backed up entirely, or in chunks. Backups can be static, dynamic, or even other repositories.
Some repositories will be owned and maintained by The National Archives, others by local government bodies, and yet others by public institutions. However, under the model we would test, repositories that contribute to the archive can be owned by anyone.
Each repository will, however, need to store a unique identifier with each discrete piece of data, either directly or via associated data. Beyond this, no additional data (such as metadata) need be stored in the repository itself.
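One well-established way to get such an identifier, without any central authority issuing them, is content addressing: the identifier is a cryptographic hash of the bytes themselves. This is a general technique (used by Git, IPFS, and many preservation systems), offered here as one option, not as a claim about The National Archives' systems.

```python
import hashlib

# Content addressing: the identifier IS a hash of the data. The same bytes
# always yield the same ID on any repository, so duplicates are detected
# for free, and any corruption changes the ID. A general technique, not a
# description of any existing archive's scheme.

def identifier(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

doc = b"Minutes of the parish council, 14 March 1911."
print(identifier(doc)[:16])   # stable across every repository holding doc
```

Because the identifier travels with the data wherever it is replicated, repositories never need to coordinate with each other to agree on it, which fits the "repositories do not know of each other" constraint above.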
And finally, the repository should be the least capable layer of the system, as it is the layer most likely to be attacked. Hardening systems only gets you so far; but remove a system’s ability to destroy itself, and an attack can only do so much.
Secure Access Layer
The Secure Access Layer, or SAL, provides a translation and brokerage service between the collections and the repositories. It is through this layer that requests for repository access are made and managed.
The SAL works by taking a request, checking its authenticity, and, if the check passes, moving the requested data to or from the repository. It also acts to translate the returned data from bits into a human-readable format – audio files, image files, plain text, etc. Where the returned result is a richer data set, say an executable file that requires virtualisation, the SAL will return the required stack as a package.
The SAL works as a barrier to unauthorised access to secured data. An example would be data which is archived but not available for public release for N years.
The SAL is the most capable layer in the stack.
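The SAL's core job, reduced to a sketch: verify the request, then broker the repository access. Below, HMAC signing stands in for whatever real authentication scheme would actually be chosen, and the repository, key, and document are all invented; this shows the shape of the flow, not a proposed implementation.

```python
import hashlib
import hmac

# A sketch of the SAL request flow: authenticate first, broker access
# second. HMAC signing is a stand-in for a real auth scheme; the key,
# repository contents, and IDs are all illustrative.

SECRET = b"shared-key"                    # would be per-collection in practice
REPOSITORY = {"doc-001": b"1911 census page, parish of Wye."}

def sign(request_id: str) -> str:
    return hmac.new(SECRET, request_id.encode(), hashlib.sha256).hexdigest()

def sal_fetch(request_id: str, signature: str):
    # 1. Verify the request before touching the repository at all.
    if not hmac.compare_digest(sign(request_id), signature):
        return None                       # unauthorised: the SAL is a barrier
    # 2. Broker the repository access and return the data for translation.
    return REPOSITORY.get(request_id)

good = sal_fetch("doc-001", sign("doc-001"))
bad = sal_fetch("doc-001", "forged-signature")
print(good, bad)
```

Note that the repository itself never sees an unauthenticated request: the capability (and therefore the attack surface) lives in the SAL, which is exactly why the SAL is the most capable layer and the repository the least.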
Collections.
A collection is simply a set of addresses for data within a repository, plus the associated metadata, grouped by theme, subject, or some other scheme. It is concerned not with data storage, but with information about the digital objects (be they photos, emails, movies, virtual machines, etc). The collection keeps metadata which describes these objects (what, who, when, where, why) and is the layer where access rights are governed – what can be accessed by whom, how it can be accessed, and for what purpose.
A collection is where judgement about the importance and relevance of a file happens.*
A collection can point to multiple SALs, and an SAL can be accessed by multiple collections. Collections can cache and store copies if given permission.**
The collection is where pre-ingest archival information packages and dissemination information packages are created, and is also the point of ingest. During ingest the collection will store key decisions about how the data is to be stored, how it will be normalised, emulated, migrated, etc.
*Judgement can happen at the individual level (the archivist responsible) or at the group level (through cluster analysis of the ontological links). It is therefore possible, and sensible, that collections can be automatically created and machine curated.
**Permissions and content governance are beyond the scope of this document, but we have spent many an evening in heated discussion about them with multiple digital archivists.
End User Access Layer.
The End User Access Layer, or EUAL, is where people access content. These can be secured websites, mobile apps, social media, broadcast radio or television, or galleries. Anywhere, using any method, can be considered an EUAL.
The EUAL is authorised to access data from the collection, and the collection is authorised to access data via the SAL.
Linked data and ontologies.
There is one potential additional layer which provides an overarching ontological understanding of the data held in every repository.
This layer, which sits above the SAL, uses the metadata from each collection to build a dynamic taxonomy of the terms used across the entire archive. These terms are not defined by any individual or predetermined group, but by every user of every collection who has the ability to enter metadata.
This layer would also collect the usage analytics from collections, and perform cluster analysis, to determine what end users believe are the most important and relevant relationships between various parts of data, across the entire archive.
By comparing the relationships defined by your users and the taxonomy defined by your collection editors, this layer builds a true ontological understanding of the data within the archive whilst simultaneously containing the context of each piece of data.
It is from these deep links between data that the richer essence is maintained. And as it is derived from social rules it is more relevant for the majority of users. It is opinionated, but open to change, and driven by the group not the individual archivist.
This additional layer provides one answer to the question “What should we keep?”. This layer suggests you keep everything, as no individual can say when any item will become relevant again. The group, however, can and will, and this ontological layer will capture it.
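The simplest version of deriving relationships from usage rather than from a fixed taxonomy is tag co-occurrence: count which terms users apply together, and treat frequent pairings as relationships. Real cluster analysis is much richer than this, and the session data below is invented, but the sketch shows how group behaviour, not an individual archivist, surfaces the links.

```python
from collections import Counter
from itertools import combinations

# A sketch of usage-derived relationships: count which tags users apply
# together, and treat frequent co-occurrence as a link. Real cluster
# analysis is richer; the session data here is invented.

sessions = [
    {"census", "1911", "kent"},
    {"census", "kent", "parish"},
    {"maps", "kent"},
    {"census", "1911"},
]

pairs = Counter()
for tags in sessions:
    for a, b in combinations(sorted(tags), 2):
        pairs[(a, b)] += 1

# The strongest links emerge from the group, not from any one archivist.
print(pairs.most_common(2))
```

Run over millions of sessions instead of four, and compared against the editors' own taxonomy, counts like these are what let the layer say which relationships the archive's users actually consider important.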
The National Archives exist in part to promote best practice. To be the exemplars. To provide support. To show the way.
Digital preservation is a fascinating, tough, complex, wonderful mess of a problem. We hope we have shown that we understand the challenges, that we have experience in trying to answer some of the questions, and, most importantly, that the chance to play our small part in the future of this most exciting of projects excites us to a point we can’t express in words.
You are not a moth.
I feel it is time to discuss something that’s been on my mind for a while. I’m uncomfortable with your addiction to the lightbulb.
I don’t think it’s doing you any good. In fact I think it’s going to kill you. I think you’re going to keep bouncing into it, looking for inspiration or an answer, until your important wing dust burns away and you fall to the floor like an old and dead Pigeon.
You don’t need it.
It’s never going to give you the answers. It’s just a light. And when it burns out it’ll be replaced by an identical one, maybe one with diodes instead of a filament, but it’s still just a bulb.
We’ve been here before. Last time it was that disco ball hanging in that discarded van that all the other Moths were glued to. You bounced your head into that for weeks believing it would turn you into a butterfly. All you got was a broken face.
And then there was that gas boiler pilot light. You were convinced flames were back in vogue. “It’s retro chic” you said. How long did that last?
What’s next? Ahh, a new site to look at. Awwwards. Hmm. Sweet nectar. Lots of new inspiration to steal from.
You remind me of Microsoft. They used to be addicted to their photocopier. It was a crappy old model with a slightly mis-calibrated drum. The copies were never any good. They’d always be slightly bent. They never knew why the designs they’d stolen didn’t work. And they weren’t their designs to begin with so they were never in a position to fix them.
Design Moths are no different.
They hunt around for the next in an endless line of lights to bump up against, hoping that the insight of the designer they’re trying to steal from will somehow be injected into them through some mystical method. It won’t.
But even if it did, it wouldn’t matter. The facsimile was created on Microsoft’s copier. It’s broken. Design Moths will never understand it. The design challenge they are facing is nothing like the design challenge those other designers were facing. So why should someone else’s idea save their arse now? It won’t.
Design Moths are the worst of all things. An uninformed consumer.
Don’t do it.
You are not a moth.
You are a designer.
A creative. An individual. Somebody who questions. Somebody who is not afraid to fail, because within failure is discovery. Something learned. The best of all things.
This is what you do; it is who you are. If not, you are in the wrong job. A designer is not somebody with that word in their title. A designer is a problem solver. A designer will indefatigably question everything until they know why things are the way they are. And they’ll not settle on the first answer.
A truly great designer is a real pain in the arse.
They are inspired by everything around them. Sometimes those are things within their direct realm of knowledge, but often they are not. They do this because great design is everywhere, not just on internet award sites or places where people show off their latest skeuomorphic button set. Tools, furniture, sculpture, architecture, engineering, video games, nature. Our world is full of great design inspiration, so why limit yourself to just one medium?
That is lazy and foolish.
Don’t do it.
You are a designer. You don’t blindly follow commands without thinking. You don’t churn a handle and watch clone websites fall into a bucket awaiting dispatch. You can’t do this, because every product you work on is there to solve a specific set of problems.
You know this. So why are you Googling ‘website design inspiration’ again?
It’s not easier, it’s not quicker. It doesn’t lead to acceptable work. It is there as a last resort when everything else has failed. When it’s 11am, you have an end of play deadline dropped on you out of nowhere, and your mind goes ___________.
For your own mental health don’t do it. It’s addictive. You’ll waste away chasing the bulb.
Instead do something else.
Close your eyes.
Count to ten.
Grab the team and find a room. Demand more. Ask questions, be a pain in the arse. This isn’t right, you can’t produce design like this. You are not a photocopier.
“I AM NOT A PHOTOCOPIER!”
You need context. You need understanding. Who is the client? What does their business do? You don’t steal a finished design and then post rationalise it. You’re a designer. You rally the team, the entire team, to identify the real problem the client has.
That one crucial issue that needs stamping on.
Is it a content issue? Is it an SEO issue? Is it a political issue? Or is it a technology issue? Or all of them?
Work with the team to solve it. You are not an island. Designers are sponges, you need liquid. The team is your goo. Suck up every piece of knowledge they have, every insight, and every concept.
Identify where other people have solved this issue, and for your own sake don’t limit yourself to the web. A problem spans every discipline, and a web design is NEVER the answer.
Talk to system administrators, psychologists, back end developers, interaction designers, the receptionist, the local barista. Talk to everyone, they’ll all have a unique input and together you’ll come up with a great first answer.
And then, armed with knowledge and ideas, grab your pencils and start scribbling. You don’t need Photoshop, you don’t need Sketch. Your best tool, the only real tool you need, is your capacity to question.
Let the group wisdom flow through you onto the page. Answer the questions, and discover more of them. Challenge everything. Iterate. Never be precious, and never stop questioning.
That is where you belong, amongst a team. Asking questions, finding answers, and designing solutions.
You don’t need the bulb.
You are not a moth.
Don’t worry if nothing works.
When I was 11 I figured out how to modify the control board on my dad’s VCR so that we could use an old fashioned clock to set up when it should start and stop recording. This was my first exposure to electronics and computer systems.
5-ish years later I was poking around a serial modem card and discovered that just like the VCR it could be modified – in this case through a series of audio tones which would allow control from outside of the host PC. This was my first exposure to hacking.
5-ish years later I was poking around a few networks when I discovered a vulnerability which would allow me to take control of network cards in remote systems. I wrote a script and set it loose, not to be malicious, but to see what would happen.
A few hours later I discovered I was now in control of more systems than I was comfortable with. And I was scared. I killed all the files associated with the script, removed the hard drive and my network adaptor, put them all in a canvas bag and smashed the shit out of them with a lump hammer. I then walked a mile down the road and dumped that bag in a random bin. The paranoia lasted for months.
Speak to anyone who started to code from an early age and they’ll most likely have similar stories. These stories happen because computer technology doesn’t work.
When Ed blew the whistle on the CIA and NSA, nobody in the security scene was surprised. None of us were bothered either. We’d spent the first bits of our lives looking for and finding countless bugs and holes in technology, and now that we are developers ourselves we spend our lives building bugs and holes into technology. The spy people aren’t able to get at all our stuff because they hold godly powers – they’re able to get at it because we leave all the doors open.
Building the holes.
Here’s the thing – technology gets more complex every time it iterates. Moore’s law should be re-written to say ‘the security within computer systems will halve in capability every two years’. Think about what is happening inside your computer for a second. There are thousands of components talking to each other in ways so complex your brain cannot comprehend them. The reason Microsoft got so big so quickly is that writing even a basic operating system needs hundreds of people building the thousands of bits of stuff that users demand.
Take the simple example of sending an email – your computer needs code to make the keyboard function, code to interpret the keyboard input and store it in memory somewhere, code to make the memory work and to let that storage happen. It needs code to make the screen work, and to take the keyboard stuff now in memory and convert it to fonts on the screen. And yes, it needs code to make fonts work, code to give the fonts a window, and code to give that window a thing that can accept text. The list of stuff needed goes on and on. I have no idea how many individual chunks of code are needed to run a modern operating system, and I’d bet that not one person on this planet could tell me. The complexity of modern software is impossible to grasp, so is it any surprise that code is full of holes, bugs, insecurities, and general mess? No.
Code is written by people and people screw things up. Instead of focussing on writing clean and secure code we think about going to the pub or the girl in the next office. We think about the time and how long before we can bugger off or work on something more fun, and so we write stuff that’s not very good. And because no piece of code stands alone, the bit I wrote that is not very good is reliant on another bit that’s not very good. Layers upon layers of bad code, full of holes that let people in, built to arbitrary deadlines.
Release the rampant wildebeest.
Now take the above and multiply it by about (insert big number here). Welcome to the world. What was bad when constrained to a single PC becomes laughable when you apply it to the modern world. Your computer has to talk to other computers all the time and they all speak different languages and dialects. Your phone, your car, your thermostat, your watch, your lightbulbs. If it’s not possible to know how complex a lone operating system is, imagine how impossible it is to understand the entire digital estate of the world. How could this *ever* be secure? It can’t – give up the dream.
You are the problem.
You demand instant access to everything, you demand security on everything, then you demand that you don’t have to remember complex passwords. You bought your last iPhone less than two years ago and yet you demand a new one that is more connected. You demand bio sensors, you demand cross device connectivity, you demand real-time everything, and you demand that it all happens now. More and more developers sat in bigger and bigger companies are pumping out more and more code riddled with more and more bugs so that you can play Ninja Rockstar 9000 on a device with a slightly bigger screen whilst monitoring your glucose levels on your iLust watch. Your demands make my life as a hacker much easier.
I am the problem.
The more you demand the more I build. My deadlines are tight and – let’s be honest here – all I care about is meeting those deadlines so you don’t throw a strop. This means I re-use code that I know is bad and I build on top of code I assume is ok. Deep down I know it’s not ok though, it’ll be broken because everything is. But I don’t worry because I’ll fix these things in ‘Phase 2’ – but then there you go demanding more new stuff.
But is there a problem?
I don’t think so. If your code is running a life support machine, you need to make sure it’s capable of not breaking before you go plugging it into someone. That is important. Is making our email more secure than a life support machine something we should aspire to? No.
Nobody cares about your emails. You are not important.
If you ask anybody who understands technology, they’ll tell you that digital security has never, and will never, exist. Unicorns are more likely to exist than a secure digital world, and believing in one is a crazy and dangerous idea. If people believe they are safe they’ll do stupid things – Norton or Kaspersky are not going to stop me grabbing your bank account details, but they make you feel a bit more secure. Our last best hope is that everyone embraces the lack of security and starts to behave a bit more sensibly when it matters and relax a little when it doesn’t.
If you are a developer you owe it to yourself and your client to take the time to write code as cleanly as possible, to write it knowing that it’s broken, and to write it knowing that you or someone else will have to patch it in the future. You are obliged to do this and more; you are responsible for consulting with the client and telling them ‘no, you cannot have this shiny button until the broken thing is less broken’.
If you are a client you must listen to your development team, because they know more than you about digital. You must not promise your managers that thing X will be delivered on day Y before the team building it have given you that date. If you must deliver something super quick, accept that the thing you deliver will be much less than you had in your head. If you keep making your demands, if you keep desiring shiny wonder pillows, what you’ll actually be buying is itchy stabby nightmares dressed as wonder pillows.
We must all care when we can make a difference and not care when we can’t.
Digital security is nonsense.