<h2><a href="http://blonde3.com/writing/2023/230217_you_are_not_late/">You Are Not Late</a></h2>
<p class='date'>Nic Hodges, 17 February 2023</p>
<blockquote>
<p>“The trouble with computers is that all they give you is answers.” - Pablo Picasso </p>
</blockquote>
<p>You have no doubt noticed <em>quite a bit</em> of noise about Artificial Intelligence lately, with the recent release of text-based ChatGPT, and image generators like Dall-E, Midjourney and Stable Diffusion in the months prior. </p>
<p>I can’t recall the last time a technology generated this much coverage, both broadly and specifically within the media and marketing sector. There have been so many articles and newsletters and blog posts (mostly with the tired trope of “an AI wrote that paragraph!”) that it’s understandable to think you’re way behind on this potentially monumental shift. </p>
<p>But you’re not late. Realistically, if you’re in an agency, brand or media company that’s implementing some bright shiny new AI-powered tool that got sold in by a breathless salesperson in the last six months, you’re probably being taken for a ride. </p>
<p>We’re still at the beginning of the beginning of this technology (despite the field being almost seventy years old). Nobody knows where it’s going, and it’s a challenge simply to think about what the potential pathways are. </p>
<h3>There is no one simple mental model for thinking about AI.</h3>
<p>Information design pioneer Edward Tufte talks about <em>seeing with your eyes, not your words</em>. What he means is that we often simply look at the world and collapse what we’re seeing down to the simple descriptors of what we already know. A tree, a crowd, a sunset. In doing so we miss the actual substance of what we’re looking at. </p>
<p>With technology in particular this is an easy trap to fall into (I can’t count how many “Uber for X” startup pitches I’ve heard, which is now evolving into “TikTok for X”). And with AI tools booming, our default mode for beginning to comprehend what we’re seeing is to look for the familiar comparisons. </p>
<p>One common line is that ChatGPT is like a conversation with an overconfident bloke who’s read all the books in the library but doesn’t actually understand them (or remember them correctly). This isn’t incorrect, but it raises the question: what’s the use? While ChatGPT is a fun and powerful toy, it doesn’t really point to where we might be headed. </p>
<p>Another mental model is that AI tools are like Excel spreadsheets. This requires an understanding of just how much spreadsheet software changed office jobs, but once you get that it sort of makes sense. At some point most desk jobs require some sort of use and knowledge of Excel (or Google Sheets if you work somewhere cool). You get good at it, and it makes your job a bit easier. Some people spend most of their days using it. Some people use spreadsheets in ways that they were absolutely never designed for. In this sense we could look for how general AI tools like ChatGPT and Dall-E are being used in workplaces in order to find the valuable tools to build. And indeed, dozens of startups are being founded every day to do exactly this. </p>
<p>But these types of definitions are just mapping our current world onto a new technology. What about truly new mental models? </p>
<p>Jaron Lanier is one of the wise old sages of Silicon Valley. He possesses the unique talent of being immersed in technology for decades yet clearly seeing the negatives and avoiding the hype machine. His view on AI? </p>
<blockquote>
<p>My attitude is that there is no AI. What is called AI is a mystification, behind which there is the reality of a new kind of social collaboration facilitated by computers. A new way to mash up our writing and art. </p>
</blockquote>
<p>I think Lanier has a point here. Without trying to predict the future, we will likely see AI evolve to become a new kind of collaborative technology for both work and play. Sitting behind most AI tools is an enormous pile of training data. AI tools give us the ability to collaborate with literally millions of people who have created that training data, all in an instant. </p>
<p>AI may be the best reminder yet that everything is a remix. (It’s worth noting at this point there are all sorts of ethical and legal issues with that training data – a topic I am absolutely not about to wade into.) </p>
<p>One thing I’ve noticed is that the mental models people apply to AI tools are often more a mirror than an insight. Engineers see the world as a series of problems that can be solved through rational optimisation, and so the future of AI is all about optimisation. Creatives are constantly looking for new and unusual inspiration and ways to rapidly communicate their ideas, and so many are already using AI to generate randomness and quickly mock up ideas. Managers are looking to improve productivity, and so look at AI as a way to automate processes or decrease errors. But… </p>
<h3>The AI isn’t coming to take your job.</h3>
<p>Firstly, AI has no agency, so “it” can’t do anything. Your boss might be coming to take your job, but that’s not really a headline that gets the clicks. </p>
<p>The rapid-unemployment-because-AI narrative completely underestimates the speed at which technology shifts from the general to the specific. Despite some big leaps in what’s possible now compared to a few years ago, the technical and cultural implementation of new tools in workplaces moves at a human speed. Your job will likely involve some interaction with AI tools in the next few years. But you already come into contact with AI every day – whether it’s facial recognition systems, audio assistants, or your phone camera. </p>
<p>It’s worth calling out here the one specific thing that ChatGPT is very good at – generating passable but often incorrect slabs of text about known topics. So if your job is writing for a content farm, I’m sorry but AI is coming to take your job. Really fast. (This will also cause a huge problem for Google in the forever battle with content farms, but more on that later) </p>
<p>Another common narrative is that AI will make anyone who works in commercial creativity redundant. Outsiders to the creative sector often see Dall-E and ChatGPT and figure that AI will wipe out all those hipsters in t-shirts riding skateboards around the “office”. But the words and pictures produced by creatives are not the work. The work is a long and often weird process, of which the words and pictures are a small final output. Creatives are safe for a while, and many I’ve talked with are already using AI tools for creative nudges and rapid prototyping. </p>
<p>There’s also a common trope that AI will create a whole new range of job descriptions that we’ve never imagined before. Which is great fodder for futurists but realistically doesn’t mean much. AI will get integrated into tools and technology, and we will use those tools and technologies. To go back to the Excel comparison, spreadsheets didn’t spawn “formula writer” as a job description. If spreadsheets helped you do your job better you just learnt how to use them. </p>
<h3>There are more blind spots than you might think.</h3>
<p>There is a strong selection bias to the AI use cases making headlines right now. And this points to one of the biggest blind spots in getting from “fun demo” to “useful tools”. For every amazing thing that an AI tool has created, there are thousands of failed efforts – nonsense text and six-fingered hands. And humans had to wade through all those failures to find the one winner that makes the headline. </p>
<p>That’s not to say things won’t get better. But they will get better in quite specific directions. You may recall the buzz a few years back when DeepMind’s AlphaGo beat the world’s best Go player. One reason this was possible is that games like Go can be scored. As a result AlphaGo was programmed to play games against itself, collecting data on what patterns resulted in a win. But when it comes to image generation like Dall-E, how do you score the output? ChatGPT does have a thumbs-up or thumbs-down feedback mechanism, but that still requires a human in the loop. So while certain areas of AI tooling will see rapid development (tasks where a computer can measure and score the quality of output), other areas will be remarkably slow (tasks where humans need to give feedback or scoring is highly subjective). </p>
<p>Improvements won’t come at the pace we expect. Our expectations of technological progress are based on the last few decades of hypergrowth – the shift from a blinking cursor to hyper-realistic video games, from a Nokia 8210 to an iPhone. The drivers of that growth (primarily Moore’s Law but also global-scale manufacturing) aren’t super relevant to AI tools. An order-of-magnitude improvement would be nice in the next five years, but it’s highly unlikely. </p>
<p>Cost is another blind spot. Specialising these tools for a specific sector (or even company) is a tempting leap to make. But that is a huge investment that very few companies could afford to make, particularly without a clear path to how it would have a positive financial impact. According to OpenAI’s Sam Altman, ChatGPT costs a few cents per query. And that’s after training the GPT-3 large language model, which has been estimated to have cost between $4.6 million and $12 million (OpenAI hasn’t confirmed either way). </p>
<h3>What’s The Next Thing?</h3>
<p>After several years of interesting AI tools being publicly released, ChatGPT seems to be the one that has captured the public imagination. One interesting aspect of ChatGPT is that it’s actually not that new. It’s been around in basically its current form for several years (the GPT-3 language model was released in June 2020), but wasn’t publicly accessible via such a simple web-based interface. </p>
<p>Given ChatGPT’s rapid rise to tech stardom, we’re already seeing the other big tech companies prepare to roll out their own versions. Google likely has way cooler AI tools ready to go, but has been held back by (probably very reasonable) ethical concerns. By the time you’re reading this it’s likely they (and many other tech firms) have already rolled out public AI tools, putting those ethical issues aside. We’re at the beginning of a flood of general purpose tools and demos that will bring years of breathless headlines. And then we’ll begin to see the innovation. </p>
<p>It’s not until these general purpose AI tools become widely programmable that this real innovation will happen. And it’s not obvious what that innovation will be. When Google Maps launched it wasn’t obvious that we would end up with Uber. When the iPhone was released it wasn’t obvious that we would end up with Instagram. As AI tools move out of large (and well-funded) big-tech labs to become more accessible for new and interesting applications, the Darwinian process of use cases will begin. Trying to crystal ball what these use cases are is a fool’s errand. To use a well-worn line – when the Model T Ford was launched, nobody predicted that we would end up with Walmart. </p>
<p>One thing seems certain: AI is the bright shiny object in the world of media and brands for the foreseeable future. That’s not to say it won’t add real value soon. But it’s going to be challenging to find that value amongst the noise. </p>
<h3>What’s next for brands?</h3>
<p>At risk of crystal-balling, if we think of AI tools as a collaboration with a massive number of people through a huge dataset, it’s possible to start thinking about some applications for brands and agencies. It’s worth noting some of these ideas are already being worked on, and I’m not going to point to specific tools as the whole space is moving a bit too quickly for that. </p>
<p>Customer data is one area that could get interesting. The ability to take huge amounts of data from customer feedback (both text and calls transcribed by AI tools) and find unseen patterns and emergent trends is well suited to the way AI is working right now. It’s no silver bullet, and the danger would be thinking a tool like this could deliver insights that result in company-defining change. But it’s easy to see how this is a one-percenter that gets bolted into a platform like Salesforce pretty soon. </p>
<p>Similarly the ability for AI text tools to not feel like rigid scripted response bots could be a big leap for customer support. We get into the murky area of support chat bots pretending to be human at this point, but I actually doubt people care if they’re chatting with a computer as long as it’s not frustrating and obviously scripted. Again, this is a problem well suited to current AI methods. </p>
<p>It’s tempting to think media planning is ripe for AI-fuelled disruption. Feed in thousands of media plans and results, and then just ask an AI tool to build you a response to a brief. But there’s both an input and output problem to overcome to achieve this. On the input side, media plans are inconsistently structured. And while the ingestion of unstructured data is certainly easier these days, the ability to extract consistent and accurate inputs from years’ worth of media plans is just not there yet. Additionally, the business outcomes and objectives for any plan are usually in another document somewhere or (more likely) lost to time. </p>
<p>On the output side media planning highlights perhaps the largest deficiency in AI tools: zero awareness of context or culture. The bulk of work that any agency in marketing and communications does is deeply reliant on understanding not just objectives and audience, but the culture in which that brand and audience exist. While AI tools could make some of the repetitive bits of this process faster, that human awareness will continue to be important. </p>
<p>Similarly for creative agencies, the cultural shortcomings of AI tools in both text and images make them unsuitable for creating fully-formed ready-to-roll ideas. But both upstream and downstream there is already a plethora of tools emerging to make creative work better. Upstream, the use of AI tools for idea generation and exploration is well underway. Even basic use of ChatGPT with early ideas can help uncover new and interesting directions. Again it’s not a silver bullet, but it’s another tool in the toolbox for creatives. Downstream, in the production of creative, AI tools are also already prevalent – to the point that many people editing video or audio, shooting and retouching photos or illustrations don’t even think about them as AI anymore. They’re just tools. We’ll likely see an acceleration in some of these use cases, but similar to AI in creative generation, they just add to the toolbox. (I’ve consciously avoided talking about creative-optimisation tools that churn out thousands of different ads to run on Instagram etc. While they do exist and probably work fine for driving sales, I’m not sure they’re relevant for marketers and agencies working on brands with long-term goals) </p>
<h3>What’s next for media companies?</h3>
<p>The internet drove the cost of media distribution down to essentially zero. Two decades in and it’s not controversial to say that this has caused quite a few challenges. And now, AI tools will bring the cost of media creation down to essentially zero. It’s hard to imagine how this won’t go very badly. </p>
<p>But first, the good bits. Similar to creative production, AI tools are already embedded into a lot of the technology used in media companies, whether writing tools or homepage optimisation or unearthing breaking news in the sea of social media. This should be a good thing, and newsrooms will be like so many other workplaces where AI tools keep making things 1% better, while highlighting the tasks that humans are uniquely suited for. </p>
<p>The downside will be a flood of content that doesn’t require any real insight or original research. The original sin of the digital media era is the reliance on advertising, which means a reliance on page views (and in turn, a reliance on traffic from social networks). This reality has resulted in both content farms (sites churning out low quality content aimed purely at answering search queries) and clickbait. Content creation for both has been outsourced to the lowest bidder, and can now be outsourced to AI tools for a fraction of the cost and time. </p>
<p>One upside of this incoming flood may be that trusted media brands become more valuable. However we’ve heard that argument before with social platforms, and the reality was arguably the complete opposite. While it’s obvious that something is wrong when an AI-generated photograph has a six-fingered hand, it’s harder to spot that sixth finger in text. When it comes to most media, good enough will satisfy most people. And it’s never been easier or cheaper to create content that’s good enough. </p>
<p>So yes, the next few years are going to be weird and bumpy for media companies. But to make matters worse their survival is very much in the hands of big tech. Just this year we’ve already seen every big tech company scramble to get AI tools into market. But behind the scenes there is just as much work going on to stop the incoming flood of AI content, particularly at Google (and YouTube), Microsoft, Baidu and Amazon. These companies are responsible for billions of searches every day, and now need to understand how to deal with AI-generated content when looking for answers. The survival of media companies will likely be an afterthought when big tech’s stock price is under threat. </p>
<h3>The party is kicking off.</h3>
<p>It’s clear that AI tools are going to have a huge impact on the next few decades. There is a good chance the broader AI sector will be <em>the</em> defining technology of the next twenty years, taking over from the smartphone (and the internet before that, and the PC before that). </p>
<p>What form that impact takes is unknowable right now. But it’s worth paying attention to the ideas and experiments that are happening. One huge benefit of where we are with technological innovation today is that quite a lot of it happens in public. </p>
<p>You can start playing around with tools like Dall-E and ChatGPT right now. And through playing with them you’ll start to understand potential use cases. And any of those use cases – from music generation to realistic voice synthesis to automatic video editing to data visualisation – are probably already projects that are in development. And some of those projects will become central to your job and your business over the next few years. </p>
<p>You’re not late to the AI party. But it’s definitely kicking off. </p>
<p class='date'>- February 2023</p>
<h2><a href="http://blonde3.com/writing/2018/180806_heres_how_much_youre_worth/">Here’s How Much You’re Worth</a></h2>
<p class='date'>Nic Hodges, 6 August 2018</p>
<p><em>This article originally appeared on <a href="https://www.crikey.com.au/2018/07/09/heres-how-much-youre-worth-to-an-advertiser/">Crikey</a> as part of their <a href="https://www.crikey.com.au/prying-eyes/">Prying Eyes</a> series.</em></p>
<p>A digital user is “like a data factory,” <a href="http://www.roughtype.com/?p=8394">says US technology writer Nicholas Carr</a>.</p>
<p>“When I drive or walk from one place to another, I produce locational data,” he explains. “When I buy something, I produce purchase data. When I text with someone, I produce affiliation data. When I read or watch something online, I produce preference data. When I upload a photo, I produce not only behavioral data but data that is itself a product.”</p>
<p>The revelations earlier this year that political consulting firm Cambridge Analytica had acquired and used data from tens of millions of Facebook accounts shone a spotlight on just how much personal information is being collected and stored by technology companies.</p>
<p>But while media coverage of the scandal understandably focused on how that data was used for political purposes, less attention was given to the question of why it’s collected in the first place, and how much it’s worth to the companies that collect it.</p>
<p>Google, Facebook and Amazon like to say they collect your data to make their services better — for example, by displaying search results that are geographically relevant, making friend recommendations you appreciate, or suggestions for books you might like.</p>
<p>That’s not untrue, but they scoop up billions of pieces of data for a much bigger reason: to sell to advertisers. </p>
<p>Personal data has always been valuable to advertisers. It’s the reason why loyalty cards exist — so advertisers can paint a picture with numbers about who is buying what, create better ads, and put those ads in the right places.</p>
<p>Similarly, credit card companies sell data about your purchasing habits to third parties for the same purpose (the common trope is that your credit card company knows you’re going to get divorced before you do).</p>
<p>What’s different in the age of smart phones and social media is the scale and precision of the data being collected. A supermarket or airline might have an idea of your household income through their loyalty data, but LinkedIn knows your employer and job title. Your credit card company may think you’re likely to purchase a luxury car some time in the next year, but Google knows you just searched for your nearest BMW dealer.</p>
<p>This trove of personal information is shaped by your search history, photos, social media posts, check-ins, emails, and by tracking you across almost every website you visit. It is almost impossible to avoid leaving this trail of data breadcrumbs.</p>
<p>As computer security expert Bruce Schneier writes in his book <em>Data and Goliath</em>: “Surveillance became the business model of the Internet because it was the easiest thing that made money and there were no rules regulating it.” </p>
<p>And that business model is big business. In 2017, Google sold $127 billion in advertising globally – or $35 for every single person on the internet. In the same year, Facebook sold $53 billion of ads. In the last 12 months, Facebook has earned on average $28 per user globally, and while it doesn’t reveal how much it makes from advertising per Australian user, in North America that number is $119.</p>
<p>Comparing these figures to traditional advertising revenues reveals just how valuable personal information can be. News websites also sell advertising, but do so based on broader demographic data rather than personal data.</p>
<p>A publication like SmartCompany (Crikey’s small business-focused sibling), for instance, charges advertisers around 5c to show you an ad (in the media industry digital ads are sold in thousands, referred to as the CPM — or Cost Per Thousand — so 5c per ad becomes a CPM of $50). To be as valuable to SmartCompany as you are to Facebook, you would need to view around 2400 articles per year.</p>
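<p>The arithmetic behind those comparisons is simple enough to sketch. A minimal illustration (in Python, using only the 5c-per-ad and $119-per-user figures quoted above):</p>

```python
def cpm(price_per_ad: float) -> float:
    """Cost Per Mille: what an advertiser pays per thousand ad impressions."""
    return price_per_ad * 1000

# SmartCompany's roughly 5c-per-ad rate, as quoted:
smartcompany_cpm = cpm(0.05)   # a $50 CPM

# Facebook's average North American revenue per user is ~$119/year.
# At 5c per article view, matching that value takes roughly:
views_needed = 119 / 0.05      # ~2380 article views per year
```

<p>Which is where the figure of around 2400 articles per year comes from.</p>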
<p>But some types of personal information are worth much more than others. Because Google and Facebook both have tools allowing anyone to create hyper-targeted ad campaigns, it’s possible to pinpoint exactly what different pieces of personal information are worth.</p>
<p>Google has a significant advantage over Facebook when it comes to advertising, because while Facebook may know every detail of your life, Google knows what products and services you’re looking for right now. Google lets advertisers bid for the right to show you an ad based on your search queries. The cost of that winning bid varies widely, but the basic rule is that the more valuable the potential sale, the higher the cost. </p>
<p>If you’re searching for a wedding venue nearby, for instance, an advertiser will pay around $1.50 if you click on their ad (Google sells their search ads based on clicks, but the AdWords tool quotes a CPM of around $250). Looking for a dentist in your area? Google will pocket about $5 if you click an ad in those search results, for a CPM of almost $1200. But financial products see ad costs skyrocket – if you search for “Mortgage Broker” and click an ad, it will cost the advertiser around $9 per click. For “Car Insurance Quote” it’s closer to $11, or a CPM (cost per thousand) of over $5000.</p>
<p>While Google benefits from your online searches, Facebook benefits by collecting as much personal information about you as it can. Facebook’s Ad Manager tool tells advertisers how many people in its network fit a particular description, and how much it will cost to show them an ad. Facebook doesn’t necessarily make more money from promoting mortgages than toothbrushes, though, because there are so many more opportunities to serve up ads in the 82 minutes the average Australian spends scrolling through Facebook every day.</p>
<p>And the smaller an audience, the more desirable it is for an advertiser to reach, and the more valuable it is for Facebook.</p>
<p>Home owners? There are 5.5 million of you in Australia on Facebook, and an advertiser can get an ad in front of you for a CPM of $8.49. A smaller number of Australians on Facebook have an income over $150,000 (1.8 million), and advertisers will pay a Facebook CPM of $10.38 to show you their wares.</p>
<p>Thinking about buying a new family car? You’re one of 720,000, and advertisers will pay a CPM of $12 to reach you. And congratulations to the 35,000 Australians who got engaged in the last three months: you have advertisers paying around 3.7c to show you an ad – earning Facebook a $37 CPM.</p>
<p>“Data sounds very scientific, impersonal and hygienic,” says iconoclastic US advertising writer <a href="https://adcontrarian.blogspot.com/2017/10/when-data-is-dangerous.html">Bob Hoffman at The Ad Contrarian</a>, “but it is not.”</p>
<p>To marketers, writes Hoffman, data is not all numbers and algorithms — “it is your sexual preferences, your religious beliefs or lack thereof, your banking details, your medical and psychological diagnoses, your work history and political preferences. It is thousands of facts about you that you never suspected anyone knew or collected.”</p>
<p class='date'>- August 2018</p>
<h2><a href="http://blonde3.com/writing/2018/180413_final_thoughts_on_facebook/">Final Thoughts on Facebook</a></h2>
<p class='date'>Nic Hodges, 13 April 2018</p>
<p>I’ve been getting a lot of questions around Facebook, data access, and digital marketing strategies over the past few weeks. Last night I was on ABC 7.30 talking briefly about how the Facebook user data exposed in the Cambridge Analytica (CA) breach might be used for political purposes. (The segment is available to stream <a href="http://www.abc.net.au/7.30/how-much-data-does-facebook-have-on-you/9642832">here</a>, sorry if it’s geoblocked in your country)</p>
<p>Over these phone calls and emails and interviews I’ve ended up with a few notes that I thought worth posting. There’s been a lot of chat about this topic, but it seems to be reaching the end of its news-cycle, so here’s my summation and (hopefully) final thoughts…</p>
<h2>Why has CA been the straw that broke the back of this issue?</h2>
<p>So there’s an argument that Trump and Brexit - and the idea that the CA data influenced those outcomes - are the reason people care. But I don’t think that’s it. Trump and Brexit suddenly gave regular non-technical people an understanding of how these tools work - a simple example with which to realise the power of this data. And that’s what was really needed in order for concerns to be raised beyond the tech-world. </p>
<p>This has all happened in the context of a great social and mental health question around these platforms. For Facebook, Twitter and Instagram, the last few years have seen a steady flow of studies suggesting that there are some considerable downsides to our use of these tools.</p>
<h2>It’s not new</h2>
<p>The well polished sales pitch from Facebook to advertisers was always about deep understanding - Facebook knows consumers better than consumers know themselves.</p>
<p>So the concern many people in tech have held for a while was that if it’s possible to get to this one-to-one targeting to increase widget sales by 1%, it’s possible to do some pretty dark things as well. The only thing stopping that is the security of the data and the limits of the tools that Google, Facebook, etc. build. (Jaron Lanier, Bruce Schneier, and Douglas Rushkoff are just three people who have been vocal on this issue for over two decades)</p>
<p>Obama’s campaign was held up as a best-practice case study in digital marketing. Is that still true? If so, where was the line crossed? Once you have algorithms that can tailor an ad so that it may only ever be seen by one person and then never again, is that it? Or is it that the data was extracted from Facebook against both its terms and user expectations? Or was it simply that it was foreign actors? The answer to this is probably the biggest omission from the media coverage on this topic so far.</p>
<h2>It’s a broad stroke - but developers and middle management in tech companies don’t understand externalities well</h2>
<p>Developers are often perceived as lacking empathy and social skills - and this is often used to explain missteps. But I don’t think that’s true, not for the majority at least. The real explanation is probably closer to a misunderstanding of externalities - the second and third-order effects of what they build.</p>
<p>This is especially true when you consider the definitive optimism that fuels places like Silicon Valley. There is an unquestioned belief that technology will change the world for the better. This belief is essentially mandatory for the innovation that <em>does</em> create huge change, so don’t hate on it.</p>
<h2>This isn’t good news for small business</h2>
<p>Facebook was (and still is) an excellent platform for small businesses to build a customer base. So as long as Facebook can convince users to stay, hopefully this ability remains.</p>
<h2>Regulation is tricky</h2>
<p>GDPR is a good approach, and prioritises individuals over corporations and legalese. It’s been interesting to see the initial criticism from some in the US tech world, when it was introduced, shift to a feeling that perhaps it’s not a bad solution.</p>
<p>The focus of any regulation should be to make it a liability to hold personal data about anyone. At the moment that data is considered an asset.</p>
<p>The biggest danger of knee-jerk regulation right now is stifling innovation. If it becomes too expensive or risky to hold customer data, existing large organisations will have a strong(er) competitive moat.</p>
<h2>What about Google? (etc.)</h2>
<p>I have never seen Google expose personal data to third parties in the way Facebook does. If there is a data breach of Google that exposes almost everything they know about tens of millions of users, you can be sure that the response will be an order of magnitude larger than Facebook has faced. But that hasn’t happened, and it’s unlikely it will.</p>
<p>It’s important to emphasise the timeframes here - in 2013 I was working on a data analytics tool using Facebook’s API, and was astounded at the level of data we could see (and this was standard developer API access). That was 5 years ago. Since way back then Facebook has made choices about what data they felt was fine to share with third parties, with minimal oversight. They did so despite constant breaches of policy, and with full knowledge of how that exposed data could be (and was being) used.</p>
<p class='date'>- April 2018</p>
<h2><a href="http://blonde3.com/writing/2018/180319_the_next_tobacco/">The Next Tobacco</a></h2>
<p class='date'>Nic Hodges, 19 March 2018</p>
<p><em>This article was also published on <a href="https://mumbrella.com.au/social-is-the-next-tobacco-is-an-extreme-line-but-a-fitting-one-505709">Mumbrella</a></em></p>
<p>An interesting few weeks in the world of social-not-media platforms. I’ve done a few talks recently where I touch on the idea of social being “the next tobacco”. This idea seems to get a lot of traction, so I thought it worth expanding on. </p>
<p>But first, a recap…</p>
<ul>
<li>The <a href="http://www.bbc.com/news/technology-43385677">UN reported on Facebook’s role</a> in allowing groups and content that incited violence against the Rohingya in Myanmar, with one of the authors reporting that “I’m afraid that Facebook has now turned into a beast”</li>
<li>Facebook was <a href="https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election?CMP=share_btn_tw">reported</a> (and then <a href="https://newsroom.fb.com/news/2018/03/suspending-cambridge-analytica/">revealed</a>) to have allowed Cambridge Analytica and its partners to misuse the platform to access the personal information (under the guise of “research”) of ~50M users. This data was then used in political advertising, giving one side an unfair advantage.</li>
<li>On the above, Facebook had known about this breach for two years, and <a href="https://twitter.com/carolecadwalla/status/974995682124804099">tried to sue</a> to stop it becoming public. They also <a href="https://twitter.com/chrisinsilico/status/975335430043389952">suspended</a> the account of the source that revealed the breach.</li>
<li>Ongoing government enquiries into how social platforms have been used for political advertising are now generating such insane responses that it seems the media has stopped even trying to report them all. <a href="https://twitter.com/nowthisnews/status/963050815639379968?lang=en">This from a UK MP is just amazing</a> (<a href="https://www.theregister.co.uk/2018/02/09/google_facebook_twitter_grilled_over_fake_news_algorithms/">here’s</a> the full story given that Tweet lacks context).</li>
<li>A security researcher spent some spare time <a href="https://labsblog.f-secure.com/2018/03/16/marketing-dirty-tinder-on-twitter/">hunting down Twitter bots</a>. He found a huge network, easily identifiable as bots, with a few scripts and a couple of hours’ work. He even <a href="https://github.com/r0zetta/pronbot_search/tree/master/results">published</a> the list of bots. Meanwhile, social platforms with huge resources (and huge advertising revenues) claim that finding bots is very very hard, and will take a long long time.</li>
<li>YouTube just cannot clean up the platform. YouTube Kids (a “world of learning and fun”) is <a href="https://www.businessinsider.com.au/youtube-suggested-conspiracy-videos-to-children-using-its-kids-app-2018-3?r=US&IR=T">recommending conspiracy videos</a> - from flat-earthers to faked moon landings to lizard people. (I know you think this is old news - it’s not! This has only been revealed today!)</li>
<li>Facebook’s auto-complete search algorithm was <a href="http://nymag.com/selectall/2018/03/facebook-videos-of-search-auto-filled-to-child-porn-videos.html">suggesting child-porn videos</a> if you started a search for “video of”.</li>
</ul>
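On the bot-hunting point above: as a purely hypothetical illustration (not the researcher’s actual method), a crude heuristic script might flag accounts whose handles look like random noise and whose posting volume is inhuman - which is roughly why “a few scripts and a couple of hours” can go such a long way:

```python
# Hypothetical bot-flagging heuristic (illustrative only - not the
# method from the article). Generated handles tend to look like random
# noise, so their character entropy is high; combined with low follower
# counts and high tweet volume, this catches the obvious cases.

import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character; random strings score higher than real words."""
    counts = Counter(s.lower())
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_bot(handle: str, followers: int, tweets_per_day: float) -> bool:
    """Flag accounts with noisy handles, few followers, and huge volume."""
    return (shannon_entropy(handle) > 3.5
            and followers < 10
            and tweets_per_day > 100)

# Made-up accounts: (handle, followers, tweets per day)
accounts = [
    ("nichodges", 5000, 3.0),
    ("xq7kp2vr9zt4mw", 2, 400.0),
]
flagged = [h for h, f, t in accounts if looks_like_bot(h, f, t)]
print(flagged)  # only the noisy, hyperactive handle is flagged
```

Real detection is an arms race and far subtler than this, but the point stands: the easy cases are easy.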
<p>The line that these platforms are bad for individuals has been well worn over the past few years. It’s a Luddite’s game, and the reasonable response has always been that it’s the individual’s choice.</p>
<p>What is now happening is that the missteps, overreaches, and systemic failures of these platforms have very clear and very real externalities.</p>
<p>It wasn’t the harm to the individual that caused a shift in attitudes to tobacco. It was when the externalities became clear - second-hand smoke and public health costs - that attitudes and regulation changed.</p>
<p>“Social is the next tobacco” is an extreme line. But it’s also one that Facebook, Twitter, and YouTube appear to be approaching at high speed. If there’s not a shift in attitude and action, they may become no-go zones for brands. Marketers may soon need to ask themselves if they would advertise on a pack of cigarettes.</p>
<p class='date'>- March 2018</p>18 Things Part 3http://blonde3.com/writing/2017/171208_18_things_part_3/2017-12-08T00:00:00Z2017-12-08T00:00:00ZNic Hodges<p><em>18 Things is a slightly different take on the usual “Annual Predictions” posts. Over the last few weeks I’ve posted 18 things I believe might happen in 2018, with a brief overview of why. Each prediction has a confidence level (an idea nicked from <a href="http://slatestarcodex.com/2017/01/06/predictions-for-2017/">Scott Alexander</a>), so I can come back at the end of the year and see how I went. I’ve mostly stuck to my circle of competence, but please don’t read into anything too much, and definitely don’t go betting your house on any of these…</em></p>
<hr />
<h2>13: Fake news will continue to be an issue as long as we try to solve it with technology</h2>
<p>Prior to November 2016, <a href="https://trends.google.com/trends/explore?date=today%205-y&q=%22fake%20news%22">fake news wasn’t a thing</a>. We’re now 13 months into this new post-truth world, and the genie can’t be put back in the bottle. Despite government enquiries and ongoing media attention, <strong>no progress will be made on countering fake news as long as we try to solve it with technology</strong>.</p>
<p>Fake news is the logical extreme of <a href="https://en.wikipedia.org/wiki/Filter_bubble">filter bubbles</a>. It’s related to several other meta-trends happening in technology right now - the <a href="https://www.poynter.org/news/do-facebook-and-google-have-control-their-algorithms-anymore-sobering-assessment-and-warning">opaqueness of algorithms</a>, the growth of AI, and the dominance of walled gardens. It’s also a reminder that while the internet has enabled a lot of amazing things to scale, bad things scale too.</p>
<p>Solving fake news is something that needs to happen. However, more technology probably isn’t the solution, because it’s not a technology problem. Tech legend (and co-founder of the <a href="https://www.eff.org">EFF</a>) <a href="https://en.wikipedia.org/wiki/Mitch_Kapor">Mitch Kapor</a> <a href="http://hci.stanford.edu/publications/bds/1-kapor.html">puts it well</a>: “One of the main reasons most computer software is so abysmal is that it’s not designed at all, but merely engineered”. We engineered our way into the filter bubbles, but that doesn’t mean we can engineer our way out.</p>
<p>Confidence: 80%</p>
<hr />
<h2>14: Online retail continues the march offline</h2>
<p>You can’t talk about online retail in western markets without focusing 90% of your time on Amazon. And if you focus on Amazon you’ll notice that online retail is increasingly becoming about offline retail. So in 2018 <strong>expect to see Amazon acquire another large offline retailer (likely in the UK) for over $4B.</strong></p>
<p>If you spend too much time reading about Amazon, you might be surprised to learn that <a href="https://www.census.gov/retail/mrts/www/data/pdf/ec_current.pdf">online retailing still only accounts for 8.4% of overall retail spend</a> in the US. Physical retail is still a huge opportunity, and so it wasn’t a huge shock in 2015 when Amazon announced the opening of a physical bookstore. In 2017 Amazon came close to <a href="https://qz.com/943870/amazon-amzn-will-replace-nearly-every-bookstore-barnes-noble-bks-closes-in-2017/">opening more bookstores</a> than Barnes & Noble closed. </p>
<p>If you’re in retail the problem is not that you’re being disrupted, it’s that your industry is transforming without you. It’s not about technology, it’s about customer centricity. One challenge with retail is that change is a slow process, a problem Amazon solved this year by <a href="https://www.nytimes.com/2017/06/16/business/dealbook/amazon-whole-foods.html">buying Whole Foods</a> for $13.4B. It would be surprising for them to stop the acquisition strategy with just that one purchase.</p>
<p>Confidence: 70%</p>
<hr />
<h2>15: Tech is the new junk-food, founders are the new bankers</h2>
<p>As many people and businesses shift away from the near-religious worship of “disruption”, <strong>2018 will see an increase in public concern about the effect technology is having on society, and about the benevolence of tech’s billionaire founders</strong>.</p>
<p>The argument that technology is bad for society is not a new one. In the current digital revolution people like <a href="https://www.amazon.com/Glass-Cage-How-Computers-Changing/dp/0393351637/ref=sr_1_1?ie=UTF8&qid=1511749290&sr=8-1&keywords=the+glass+cage">Nicholas Carr</a> and <a href="https://www.amazon.com/You-Are-Gadget-Jaron-Lanier-ebook/dp/B002ZFXUBO/ref=sr_1_1?ie=UTF8&qid=1511749284&sr=8-1&keywords=you+are+not+a+gadget">Jaron Lanier</a> have been sounding the alarm for years. Nor is the argument unique to <em>this</em> revolution. <a href="https://www.huffingtonpost.com/hector-l-carral/stop-saying-technology-is-causing-social-isolation_b_8425688.html">Newspapers, radio, and TV</a> were all going to bring about the downfall of society.</p>
<p>But underneath the easily dismissed Luddism there are valid arguments. The scale and speed with which negative outcomes and externalities can occur is now global and instant, a unique feature of this revolution. Add to this the revenue growth expectations of what are now some of the world’s largest companies, and there are valid criticisms of big-tech.</p>
<p>Despite the self-driving cars, AI, and VR wizardry, Google and Facebook are advertising companies. And while Amazon is not primarily an ad company, its incentives are not far off Google’s and Facebook’s - get people to buy things. Despite sloganism around “connecting communities” and “don’t be evil”, the primary concern of these companies is increasing advertising revenues, seemingly at any cost. And since these are public companies (albeit with private control), Wall Street actually has every right to demand the delivery of more profit. </p>
<p>But many of those who have delivered that outcome over the past few years are <a href="https://www.theguardian.com/technology/2017/nov/08/ashamed-to-work-in-silicon-valley-how-techies-became-the-new-bankers">now questioning</a> whether it was worthwhile, and the impact their work is having on the world.</p>
<p>None of this is helped by the activities of Mark Zuckerberg and Travis Kalanick over the past 12 months. Zuckerberg’s <a href="https://www.recode.net/2017/8/5/16100032/mark-zuckerberg-issues-president-facebook">bizarre not-running-for-president-seriously</a> presidential tour of the US was topped off with a <a href="">tone-deaf VR tour</a> of hurricane-hit Puerto Rico, complete with high-fives. Meanwhile Kalanick’s leadership of Uber (too many failures to list really) has many questioning whether all this regulation stuff might actually serve a purpose after all. To quote <a href="https://www.youtube.com/watch?v=OSSQrZQ-Jps">Scott Galloway</a>, “In the 20th century big tech saved the world, in the 21st it’s f<em>*</em>ing it up”.</p>
<p>Confidence level: 90% (definitely one of the more subjective predictions though)</p>
<hr />
<h2>16: Bad stuff scales too</h2>
<p>The internet has allowed a lot of good things to scale. Sadly though, whenever good things are created people will work out how to use them for bad things. The bad things come in many forms and only make headlines when they’re easily explainable, like Russians meddling in elections via social media. But the biggest threat for most people right now is the amount of personal data being collected and stored online. And 2018 will likely see <strong>the biggest breach and dump of personal data ever</strong>.</p>
<p>This decade has been dominated by an <a href="https://trends.google.com/trends/explore?date=2010-01-01%202017-12-06&q=%22big%20data%22,%22ad%20blocker%22">obsession with big data</a> - collecting as much data about customers and users and businesses as possible. The result is mountains of data, usually stored with terrible security.</p>
<p>While a big data breach from a brand will no doubt affect that brand directly, it will also impact consumer behaviour and privacy awareness. Over the next few years, data will shift from being an asset to being a liability. The irony is that the mountains of data probably <a href="http://adage.com/article/digital/p-g-decided-facebook-ad-targeting-worth-money/305390/">don’t make marketing any more effective</a> anyway.</p>
<p>Confidence: 80%</p>
<hr />
<h2>17: Government proposals on algorithmic transparency laws</h2>
<p>Algorithms increasingly govern our lives, often in ways we can’t quite see. There’s a growing concern about the impact of these algorithms, and a growing desire to be able to see and understand them. So in 2018 <strong>expect to see initial proposals (likely from the EU) on the introduction of algorithmic transparency laws.</strong></p>
<p>Only a few years ago, <a href="https://en.wikipedia.org/wiki/Trolley_problem">trolley problems</a> were a niche topic amongst ethicists, economists, and philosophers. In 2017, they’re the topic of <a href="https://www.theatlantic.com/technology/archive/2017/06/how-do-buddhist-monks-think-about-the-trolley-problem/532092/">Atlantic think-pieces</a>, and <a href="https://www.newyorker.com/cartoon/a20896">New Yorker cartoons</a>. Trolley problem discussions represent our increasing unease about decisions made by technology that are out of our hands, and now out of our view.</p>
<p>It’s human nature to imagine how things go wrong. But it’s not the self-driving cars that we should be concerned about. Algorithms are silently and invisibly defining the <a href="http://journals.sagepub.com/doi/abs/10.1177/0263276417722391">media we consume</a>, the <a href="https://www.forbes.com/sites/paulhsieh/2017/04/30/ai-in-medicine-rise-of-the-machines/#649faf39abb0">medical treatment</a> we receive, the <a href="https://www.smithsonianmag.com/innovation/artificial-intelligence-key-personalized-education-180963172/">education we get</a>, and the <a href="https://www.wired.com/2017/04/courts-using-ai-sentence-criminals-must-stop-now/">time we spend in prison</a> (if that’s your thing).</p>
<p>Many believe we’ve already gone too far, and the time for some type of accountability and transparency for tech-companies was yesterday. <a href="https://www.ajlunited.org">The Algorithmic Justice League</a> is fighting algo-bias today, author Steve Sammartino thinks we should <a href="http://www.abc.net.au/radio/programs/worldtoday/thisweek-intech/8941428">label algorithms like we label food</a>, and tech-godfather Tim O’Reilly <a href="https://www.oreilly.com/ideas/our-skynet-moment">believes</a> “we’ve already had our Skynet moment”. Personally I’m with them, this is one prediction that needs to come true.</p>
<p>Confidence: 70%</p>
<hr />
<h2>18: The walled gardens win</h2>
<p>The web is now well into its twenties. When it was born it was open, free, and collaborative. The last two decades have been a process of closing and commercialising, the building of walled gardens. While there’s a nostalgic drive to “<a href="https://staltz.com/the-web-began-dying-in-2014-heres-how.html">save the web</a>”, the walled gardens have won. In 2018 <strong>we’ll likely see Google and Facebook capture more than 100% of digital advertising growth globally, and Facebook representing more than a quarter of time spent online by Australians</strong> (<a href="http://digitalmeasurement.nielsen.com/digitalmedialandscape/surfing_report.html">currently sitting at 21%</a>).</p>
<p>In 1996 John Perry Barlow wrote his <a href="https://www.eff.org/cyberspace-independence">Declaration of the Independence of Cyberspace</a>, warning corporations and governments that “You are not welcome among us. You have no sovereignty where we gather”. Skip forward 21 years and we are on the verge of <a href="https://www.wired.com/story/fcc-wants-to-kill-net-neutrality-congress-will-pay-the-price/">the end of Net Neutrality</a>, a clear signpost for the end of the period of massive innovation on an open web.</p>
<p>Walled gardens winning means many things. For publishers and media companies it means that their digital advertising revenue will decline, while they simultaneously face competition as publishers and content creators from Google, Facebook, and Amazon. For brands it means an even more challenging environment for connecting with existing and new customers, with a decline in control and visibility as you are forced to play on the walled garden’s terms.</p>
<p>New innovation is needed, and it will happen eventually. And while the trendlines all seem to point in one direction, I expect 2018 to be the last year of growth in developed markets for these “traditional” digital companies (but more on that in 12 months).</p>
<p>Confidence: 100%</p>
<hr />
<p class='date'>- December 2017</p>18 Things Part 2http://blonde3.com/writing/2017/171130_18_things_part_2/2017-11-30T00:00:00Z2017-11-30T00:00:00ZNic Hodges<p><em>18 Things is a slightly different take on the usual “Annual Predictions” posts. Over the next few weeks I’ll be posting 18 things I believe might happen in 2018, with a brief overview of why. Each will also have a confidence level (an idea nicked from <a href="http://slatestarcodex.com/2017/01/06/predictions-for-2017/">Scott Alexander</a>), so I can come back at the end of the year and see how I went. I’ve mostly stuck to my circle of competence, but please don’t read into anything too much, and definitely don’t go betting your house on any of these…</em></p>
<hr />
<h2>7: The beginning of the end of the cult of disruption</h2>
<p>The boardroom buzzword of the decade is undoubtedly “disruption”. Businesses have been furiously working out how they can “disrupt themselves”, and watching in awe as the new disrupters disrupt entire industries with great disruption. </p>
<p>Any questioner of disruption is quickly labelled a Luddite, grasping hopelessly to the past. But increasingly we will realise that it’s important to ask the hard questions about where this is all headed. </p>
<p>Disruption was cool when it was happening to the hotels and the record labels and the taxi drivers. But now it’s happening to your job. And suddenly it’s not so cool.</p>
<p>An oft-cited <a href="https://www.oxfordmartin.ox.ac.uk/publications/view/1314">study</a> from Oxford suggests that 47% of jobs in the US are at risk from automation and computerisation. As this statistic becomes a reality for white-collar workers over the next year, <strong>it will become increasingly hard to find anyone who feels their career is secure</strong>.</p>
<p>(If you’re interested in more on this, I wrote <a href="http://blonde3.com/writing/2015/150911_schumpeterian_rubble/">this</a> in 2015, questioning our disruption obsession.)</p>
<p>Confidence: 80% (but I’m relying on you to be honest about whether you feel secure in your job)</p>
<hr />
<h2>8: A top 20 global advertiser pulls out of Facebook or YouTube</h2>
<p>Over the past two years, brands have increasingly been <a href="https://www.businessinsider.com.au/two-of-the-worlds-biggest-brands-are-cutting-back-on-on-digital-ads-2017-6?r=US&IR=T">questioning</a> their ability to accurately measure and verify digital advertising, and to ensure their ads don’t appear next to extremist content. These questions are not being answered, and as a result <strong>expect to see a top 20 global advertiser pull their Facebook and/or YouTube ads for over a month in 2018</strong>.</p>
<p>It’s worth noting that this isn’t just a Facebook and YouTube issue. However, in Facebook’s case its continued refusal to allow meaningful third-party measurement is pretty much an own goal. For YouTube, it seems remarkable that a company that can build a self-driving car can’t detect an ISIS video. In 2017 (a.k.a. “the year of the duopoly”), the unstoppable growth of Facebook and Google means they are now being forced to play by the grown-up rules. </p>
<p>We saw a glimpse of this in 2017 with a significant <a href="https://www.wired.com/2017/03/youtubes-ad-problems-finally-blow-googles-face/">pull-back from YouTube</a>. But when you look at the timeline of <a href="https://marketingland.com/heres-itemized-list-facebooks-measurement-errors-date-200663">failure</a> after <a href="https://marketingland.com/google-youtube-brand-safety-timeine-218317">failure</a> from both platforms, it’s clear that advertiser reactions have been undercooked. These issues are ones that simply wouldn’t fly in any other channel.</p>
<p>It’s possible that the final straw may be more revelations around the US election. Advertisers may finally have had enough not because of any political meddling, but because it is only under such a strong spotlight that we find out what Facebook and YouTube <em>haven’t</em> yet revealed.</p>
<p>But put away your party-poppers, duopoly haters. Because even if multiple big advertisers pull out for multiple months, you won’t see the slightest hit to revenue numbers, and just the tiniest blip in share price.</p>
<p>Confidence: 70%</p>
<hr />
<h2>9: Blockchain’s first killer app beyond payments and currency</h2>
<p>While we’re heading towards the 9-year anniversary of the <a href="https://bitcoin.org/bitcoin.pdf">Bitcoin whitepaper</a>, it’s still the beginning of the beginning for blockchains. 2017 saw a flurry of innovation (and scams) in the space. In 2018 <strong>expect to see the first useful app with over 5 million active users built on top of a blockchain.</strong></p>
<p>Despite the rise of mainstream media coverage of Bitcoin, the actual innovation of blockchains is still inextricably linked to the virtual currency, and as a result usually misunderstood. Blockchains enable many things, but the most interesting is distributed computing. In 2015 <a href="https://www.ethereum.org">Ethereum</a> launched, combining a blockchain with a fully functioning programming language to create what founder Vitalik Buterin calls <a href="https://www.youtube.com/watch?v=gPq9ZZn5B-8">"The World Computer"</a>.</p>
<p>Where Bitcoin opened up the potential for basic financial apps, Ethereum opened up the potential for pretty much any app you can think of. In 2017 there was a flood of entrepreneurs doing just that, sparking the highly-scammy, probably illegal <a href="https://www.theguardian.com/technology/2017/sep/05/cryptocurrency-boom-stalls-as-regulators-focus-on-icos">ICO boom</a>.</p>
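The core mechanism behind all of this is worth seeing concretely. Below is a minimal sketch (toy code, nothing like Ethereum’s or Bitcoin’s actual implementations) of a hash-linked chain, in which every block commits to the hash of the block before it, so tampering with any historical record invalidates everything after it:

```python
# Toy hash-chain: the tamper-evidence property at the heart of every
# blockchain, stripped of networking, consensus, and mining.

import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the current tip."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    """Every block must reference the hash of the block before it."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "alice pays bob 5")
add_block(chain, "bob pays carol 2")
print(is_valid(chain))                   # True
chain[0]["data"] = "alice pays bob 500"  # tamper with history
print(is_valid(chain))                   # False - every later link breaks
```

Distributing copies of this structure across many machines, plus a consensus rule for choosing the canonical chain, is what turns the toy into a shared, hard-to-rewrite ledger.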
<p>If the blockchain space can get through its trough of disillusionment, there’s proper innovation coming up quickly. That innovation needs to move beyond payments and currency. But I expect the first killer app outside these areas not to stray too far - it will probably be built around some form of crowdfunding.</p>
<p>Confidence: 60% (5M users means participants, not just speculators)</p>
<hr />
<h2>10: Commercialisation of AI will boom in the supervised learning domain</h2>
<p>Most mainstream media around Artificial Intelligence focuses on unsupervised and reinforcement learning (or misguided chat about Artificial General Intelligence). But it’s through far less sexy applications of AI that businesses without big-budget R&D teams will begin to see significant value created through this technology. <strong>Expect to see at least a couple of new $1B companies emerge providing AI-based SaaS products</strong> to data-rich businesses.</p>
<p><a href="https://en.wikipedia.org/wiki/Artificial_neural_network#Supervised_learning">Supervised learning</a> is one of the simplest approaches to neural networks, and the technique within artificial intelligence that has seen huge leaps forward this decade. Supervised learning is great at categorising things based on lots of previous examples of those same things. </p>
<p>Andrew Ng (previously leader of Baidu’s AI team and Google Brain) <a href="https://events.technologyreview.com/video/watch/andrew-ng-stanford-state-of-ai/">sums it up</a> well - anything that can be done by a human with less than 1-second of thought will soon be automated. The catch is you’ll need a lot of data to achieve this, so Ng also thinks it’s critical for businesses to have in place a data acquisition strategy to take advantage of this shift.</p>
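To make “categorising things based on lots of previous examples” concrete, here is a toy supervised classifier in plain Python - a nearest-centroid model with made-up fruit data, far simpler than the neural networks driving the current leaps, but the same shape of problem: labelled data in, categoriser out.

```python
# Toy supervised learning: learn per-label centroids from labelled
# examples, then classify new inputs by nearest centroid. Illustrative
# only - real systems use neural networks and vastly more data.

def train(examples):
    """examples: list of (features, label) pairs. Returns a centroid per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist2(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist2)

# Made-up labelled data: (weight in grams, diameter in cm) -> fruit
data = [([150, 7], "apple"), ([170, 8], "apple"),
        ([110, 5], "lemon"), ([100, 5], "lemon")]
model = train(data)
print(predict(model, [160, 7]))  # prints "apple"
```

The “less than 1-second of thought” framing fits exactly this pattern: if a human can label it at a glance, and you have enough labelled examples, a supervised model can likely automate it.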
<p>Confidence level: 70%</p>
<hr />
<h2>11: Regulation will cause one of the gig-economy unicorns to pull out of a whole country</h2>
<p>If you believe the hype, the gig-economy is one of the greatest gifts Silicon Valley has bestowed upon the world. People everywhere can earn money easily, with no more pesky job applications, shifts, and bosses. The only problem is that the gig deal is heavily skewed in favour of the tech companies. So <strong>2018 will likely see a Western country regulating, and one of the gig-economy unicorns pulling out of a whole country</strong>.</p>
<p>One of the defining (and powerful) features of online commerce is the ability to leverage wasted stuff. eBay lets you sell your unwanted stuff to strangers. Facebook fills your wasted time with baby photos. Airbnb lets you monetise your wasted spare room.</p>
<p>Mobile levelled up this trend. If you own a car and a mobile phone, you can be an Uber driver. No car? Grab a bike and deliver some food for Deliveroo.</p>
<p>These things look a lot like jobs. But they’re not - just <a href="https://www.bloomberg.com/view/articles/2017-05-11/why-uber-s-struggling-to-remain-a-tech-company">ask the startups</a>! Your drivers and riders (and hosts) usually don’t get insured by their employer, they have no rights to minimum wages, and they definitely don’t get holidays <a href="http://www.abc.net.au/news/2017-10-19/gig-economy-workers-left-short-superannuation-guarantee/9063114">or super</a>. In Australia your Uber driver has to <a href="http://www.afr.com/technology/uber-loses-gst-fight-with-the-tax-office-20170215-gudwza">pay the GST</a> for your trip out of their own pocket.</p>
<p>2017 saw many cities start to question this arrangement - most notably Austin and <a href="https://www.theguardian.com/technology/2017/nov/10/uber-loses-appeal-employment-rights-workers">London</a>. The easy retort is that powerful taxi lobbies and the like are pulling the strings. And while that may be true, there is undeniably something not quite right about these gig-tech unicorns. </p>
<p>Confidence level: 80%</p>
<hr />
<h2>12: The world’s biggest ad fraud network uncovered</h2>
<p>There aren’t a lot of people banging the drum about ad fraud, but it continues to be a huge problem. Like <a href="https://digiday.com/marketing/state-video-ad-fraud/">$13 billion</a> huge. I don’t expect marketers to care a lot more about ad fraud in 2018, but I do <strong>expect the largest ever ad fraud network to be revealed as being responsible for stealing more than $100m of ad budgets</strong>.</p>
<p>Ad fraud isn’t really just one thing. It’s a bunch of tactics that are constantly evolving. It’s also often quite technical, so it makes sense that the marketing blogs and trade news pretty much ignore the problem (interestingly, domain spoofing is relatively easy to explain, and seems to have <a href="https://digiday.com/?s=spoofing">gained a decent amount of press</a> this year).</p>
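Domain spoofing really is easy to explain: a fraudulent bid request claims inventory sits on a premium site when it doesn’t. The industry’s 2017 answer, the IAB’s ads.txt, lets publishers list their authorised sellers so buyers can check the claim. Here is a simplified sketch of that check, with made-up domains and a toy parser rather than the full specification:

```python
# Simplified ads.txt-style check (toy parser, hypothetical domains).
# A real implementation follows the IAB Tech Lab specification; this
# just shows the idea: a bid is trustworthy only if the claimed
# publisher has listed the (exchange, seller account) pair.

def parse_ads_txt(text: str) -> set:
    """Return (exchange_domain, seller_account_id) pairs from ads.txt lines."""
    authorised = set()
    for line in text.splitlines():
        line = line.split("#")[0].strip()  # strip comments and whitespace
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) >= 2:
            authorised.add((fields[0].lower(), fields[1]))
    return authorised

def bid_is_authorised(ads_txt: str, exchange: str, seller_id: str) -> bool:
    """Does the publisher's ads.txt authorise this exchange/seller pair?"""
    return (exchange.lower(), seller_id) in parse_ads_txt(ads_txt)

# Hypothetical ads.txt served by a premium publisher:
ads_txt = """
exchange-a.example, pub-1234, DIRECT   # owned and operated
exchange-b.example, pub-9999, RESELLER
"""
print(bid_is_authorised(ads_txt, "exchange-a.example", "pub-1234"))  # True
print(bid_is_authorised(ads_txt, "shady-x.example", "pub-0000"))     # False: spoofed
```

The catch, of course, is that the check only works if buyers actually run it - which loops back to the problem of nobody caring.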
<p>What I find amazing is that ad-blocking regularly causes <a href="http://adage.com/article/digitalnext/ad-blocking-unnecessary-internet-apocalypse/300470/">mass-hysteria</a> with agencies and marketers. Yet ad-blocking doesn’t actually harm advertisers, it just makes audiences harder to reach. </p>
<p>But ad fraud <em>does</em> cost advertisers. The result of ad fraud is that organised criminal gangs end up with your marketing budget. It baffles me that more people don’t care about this.</p>
<p>As long as ad fraud remains the most profitable way to steal money on the internet (arguably only challenged by <a href="https://en.wikipedia.org/wiki/Ransomware">ransomware</a>), ad fraud will continue to advance. As it becomes more advanced, it will scale to phenomenal new heights. And still it’s unlikely many will care.</p>
<p>Confidence level: 80%</p>
<p><em>Postscript: A few days after this was written, researchers uncovered “Hyphbot”, an ad fraud network running since August that may have been generating around $500,000 per day.</em></p>
<hr />
<p class='date'>- November 2017</p>18 Things Part 1http://blonde3.com/writing/2017/171122_18_things_part_1/2017-11-22T00:00:00Z2017-11-22T00:00:00ZNic Hodges<p><em>18 Things is a slightly different take on the usual “Annual Predictions” posts. Over the next few weeks I’ll be posting 18 things I believe might happen in 2018, with a brief overview of why. Each will also have a confidence level (an idea nicked from <a href="http://slatestarcodex.com/2017/01/06/predictions-for-2017/">Scott Alexander</a>), so I can come back at the end of the year and see how I went. I’ve mostly stuck to my circle of competence, but please don’t read into anything too much, and definitely don’t go betting your house on any of these…</em></p>
<hr />
<h2>1: 2018 is not the year of VR because there won’t be a year of VR</h2>
<p>The consumer use-case for VR still hasn’t been worked out, and the reality may be that there isn’t one. The hardware is pretty much there, and still nobody in the real world cares. While VR will continue to develop in niche professional areas, <strong>expect less than 20 million units to be shipped worldwide in 2018</strong> - despite lower price points.</p>
<p>Google and Facebook continue to push their hardware efforts forward - with Oculus Go <a href="https://www.theverge.com/2017/10/11/16459442/oculus-go-standalone-vr-headset-announce-pricing">announced</a> at $199, and the <a href="https://thenextweb.com/google/2017/10/04/google-debuts-updated-daydream-vr-headset/">updated Daydream</a> at $99. Both are backed by significant software efforts to build gaming and entertainment experiences, and in the case of Facebook also social VR. But in a world where the average person spends upwards of 3 hours a day watching TV and 40 minutes on Facebook, it’s hard to find anybody who even uses VR weekly.</p>
<p>The failure of consumer VR highlights the gap between Silicon Valley’s understanding of people and reality. Zuckerberg <a href="https://www.businessinsider.com.au/how-facebooks-oculus-go-santa-cruz-headsets-plan-to-make-vr-mainstream-2017-10">says</a> “We want to get a billion people in virtual reality”, but it’s probably worthwhile to start by finding a million people who think it’s a good use of their time.</p>
<p>All is not lost though. AR is the one to watch this year. Apple <a href="https://techcrunch.com/2017/06/05/apple-enters-the-augmented-reality-fray-with-arkit-for-ios/">released</a> ARKit mid-year, and a bunch of maybe-actually-useful demos have appeared since. There are over 1 billion iOS devices in the world - even if we assume only 20% of those can run iOS 11 (required for AR apps), it dwarfs the 10 million VR headsets sold in 2016.</p>
<p>Confidence level: 80%</p>
<hr />
<h2>2: The rise of a new tech giant on the wave of a new tech era</h2>
<p>We are now a decade into the mobile era, and it’s time for the next big shift, which in turn will launch the next giant tech company. And so <strong>2018 may see a new business emerge that becomes the next $100B+ tech giant</strong>.</p>
<p>Mobile usurped Web 2.0 (which itself had a good 10-year run), allowing the growth and dominance of Google (Android, maps, mobile search), Facebook (a UX perfectly suited to mobile, plus cameras), Apple (hardware), and arguably Netflix and Amazon through the proliferation of screens.</p>
<p>It’s tempting to think that the last decade has seen immense innovation. But there are many <a href="https://www.theverge.com/2015/9/18/9351197/apple-google-microsoft-tech-innovation-uniformity">that</a> <a href="https://www.vox.com/new-money/2017/7/11/15929014/end-of-the-internet-startup">argue</a> these new giants have done as much to dull innovation as to drive it (just <a href="http://www.businessinsider.com/all-the-times-facebook-copied-snapchat-2017-5/?r=AU&IR=T/#facebooks-biggest-attack-on-snapchat-came-in-august-when-instagram-copied-snapchats-iconic-story-format-4">ask Snap</a>).</p>
<p>The next era will take us beyond mobile. For now, cognitive technology and IoT seem to be the leading contenders (sorry blockchain fans). It probably won’t be a case of one or the other - it’s likely to be both combined well. </p>
<p>This new giant will remain independent. It will succeed because it will not come with the baggage of a “traditional” tech company. And sorry brands, but it won’t be built on a foundation of ads.</p>
<p>Confidence level: 60% (although we may have to wait a while to find out)</p>
<hr />
<h2>3: GDPR will come into effect with much confusion, shortly followed by a fine over €50 million for a data breach</h2>
<p>Over the past year it’s been clear that marketers are struggling to grasp the EU’s <a href="http://www.eugdpr.org">General Data Protection Regulation</a>, but they’re not the ones who should worry about getting fines.</p>
<p>GDPR kicks in on May 25, yet every time I’ve been in Europe this year it seems like the confusion around GDPR for marketers is only growing. Given the state of most CRM data I’ve ever come across, it’s unlikely many will be operating fully within the law come GDPR-Day.</p>
<p>But it’s not marketers that should be worried about getting fined. GDPR revolves around four key areas: consent (telling customers how and why you’re collecting data, and giving them the ability to see it), security (storing data in the right way), right to be forgotten (allowing customers to delete their data), and portability (allowing customers to take their data to another company). </p>
<p>Marketers will fail initially on the consent piece. But <strong>the first big (>€50 million) fine in 2018 will likely be for a data breach which includes personal information of millions of EU citizens</strong>. We’ve seen <a href="https://www.cnet.com/news/equifax-data-leak-hits-nearly-half-of-the-us-population/">over</a>, and <a href="http://www.zdnet.com/article/millions-verizon-customer-records-israeli-data/">over</a>, and <a href="http://fortune.com/2017/09/25/deloitte-hack/">over</a> again this year how likely data breaches are, and how they affect hundreds of millions of people. Companies don’t seem to be learning, so expect regulators to pull out a big stick to help them.</p>
<p>Confidence level: 70%</p>
<hr />
<h2>4: More huge failures for IoT security</h2>
<p>The Internet of Things was a buzzword that took a while to arrive, but now that it’s here, it turns out to be way more dangerous than expected. As more and more gadgets and widgets connect the internet to the real world, security remains an afterthought. It’s likely <strong>2018 will see IoT security continue to fail, with a single attack affecting more than 5 million devices</strong>.</p>
<p>Like so much else, we can blame smartphones for this one. The explosion of smartphones meant an explosion of cheap electronic components, particularly for networking. With the cost of adding wifi to an electronic device at a <a href="https://www.sparkfun.com/products/13678">couple dollars</a>, everything that can be connected is connected. Internet security legend Bruce Schneier paints <a href="http://www.eweek.com/security/schneier-it-s-time-to-regulate-iot-to-improve-cyber-security">a dark picture</a> - “Sensors are the eyes and ears of the Internet, actuators are hands and feet… We’re building a robot the size of the world and most people don’t even realize it.”</p>
<p>With potentially dozens of devices in a single home, there are a lot of opportunities for security to fail. An 11-year-old <a href="https://www.theguardian.com/world/2017/may/17/boy-11-hacks-cyber-security-audience-to-give-lesson-on-weaponisation-of-toys">demonstrated</a> to a conference this year how easy it is to hack into a connected teddy bear. At the other end of the scale, the <a href="https://en.wikipedia.org/wiki/Mirai_(malware)">Mirai</a> hack used millions of routers, cameras, printers, and other devices to create a botnet aimed at bringing down several high-profile websites. </p>
<p>As we move to a world where our power meters, cars, and even medical devices are online, something needs to change. No doubt 2018 will also see some talk of regulation of this space (most likely in the EU, where Germany has already <a href="https://www.bleepingcomputer.com/news/government/germany-bans-kids-smartwatches-classifies-them-as-illegal-spying-devices/">banned</a> some kids’ smartwatches), but we’ll likely end the year with things pretty much the same in this regard.</p>
<p>Confidence: 70%</p>
<hr />
<h2>5: Bitcoin will grow as a store of value</h2>
<p>Despite ongoing confusion and warnings from flummoxed bank <a href="https://www.bloomberg.com/news/articles/2017-09-12/jpmorgan-s-ceo-says-he-d-fire-traders-who-bet-on-fraud-bitcoin">CEOs</a>, <strong>bitcoin will continue to grow as a store of value, and finish 2018 with positive returns</strong>. </p>
<p>The last couple of years have seen significant change in what Bitcoin is, and why it’s succeeding at being that. The core promise of Bitcoin was that it was a global payment network. It has essentially failed at that goal, and instead has become a significant store of value. </p>
<p>Why the change? Bitcoin survives because of the almost magical structure of its mining incentives, combined with the network effects of being first to market. While the price of bitcoin is highly volatile, mining incentives act as bumpers that make it near impossible for a death-spiral to take hold.</p>
<p>As long as people want in on bitcoin, its value will grow. Bitcoin (and blockchains in general) require wrapping your head around concepts in computer science, cryptography, economics, sociology, and history. Over the past few years I’ve seen people perform this head wrapping - and once done, they want in more often than not. There are plenty more people yet to wrap their heads around it.</p>
<p>But it isn’t just individuals wrapping their heads around it. Hedge funds have <a href="https://www.nytimes.com/2017/11/06/technology/bitcoin-hedge-funds.html">finally</a> worked out how to get in, and have driven some of the volume and price growth this year, and institutional investors are <a href="https://www.reuters.com/article/us-investment-summit-novogratz/big-money-is-coming-to-bitcoin-ex-fortress-executive-novogratz-idUSKBN1DD2RE">itching</a> to get in.</p>
<p>When I’m asked if bitcoin is overvalued, my response is “how much should it be worth?”. The only correct answers are “more than it is now”, or “zero”. Bitcoin is now a store of value with a market cap of $110bn. The current value of all bitcoin that will ever exist is less than the value of <a href="http://www.cmegroup.com/education/featured-reports/evolving-economics-of-bitcoin-gold-currencies.html">all gold mined</a> in 2017. Bitcoin has a lot of headroom, and a long line of people wanting in. </p>
<p>Confidence level: 90%</p>
<hr />
<h2>6: Facebook admits it <em>is</em> a media company. And so it launches a media company.</h2>
<p>If governments are going to start treating Facebook like a media company, <strong>Facebook will launch a media company</strong>.</p>
<p>With revenue of $40B, still growing at over 40% every year, there’s plenty of cash to splash around. At the moment Facebook (and Google) spend most of that cash on what they call “optionality” - making sure they’re the ones to invent the future. For Facebook this means <a href="https://techcrunch.com/2017/01/17/facebook-plans-to-invest-more-than-3-billion-in-vr-over-the-next-decade/">investing $3B</a> into VR over the next decade, and probably even more than that in AI. So investing $1B in the creation of Facebook News Network isn’t too far-fetched (that’s less than it costs to keep the New York Times <a href="http://www.nasdaq.com/symbol/nyt/financials?query=income-statement">running</a>).</p>
<p>In fact, Facebook News Network might not even need to cost $1B. Facebook is now responsible for <a href="http://mediashift.org/2016/12/facebook-referral-traffic-story-2016/">around 40%</a> of all referrals to digital publishers - so creating their own journalism that exists <em>only</em> within Facebook’s walls will increase the opportunity for displaying ads. </p>
<p>Of course FNN is impossible because regulation, no? The answer to that depends on who you expect to regulate. In the US, Facebook are starting to understand the power of <a href="https://www.recode.net/2017/10/21/16512414/apple-amazon-facebook-google-tech-congress-lobbying-2017-russia-sex-trafficking-daca">lobbying money</a>, and the fact is that Facebook - along with Google and Amazon - seem to increasingly exist <a href="https://www.wired.com/story/net-states-rule-the-world-we-need-to-recognize-their-power/">above the limits</a> of nation states.</p>
<p>Confidence level: 30% (which I realise is actually a bet <em>against</em> this happening, but I still think it’s an interesting possibility)</p>
<hr />
<p class='date'>- November 2017</p>Making Choices That Maximise Optionalityhttp://blonde3.com/writing/2017/170508_making_choices_that_maximise_optionality/2017-05-08T00:00:00Z2017-05-08T00:00:00ZNic Hodges<blockquote>
<p>“Things in life we love most — including life itself — are infinite games. When we play the game of life, or the game of the technium, goals are not fixed, the rules are unknown and shifting. How do we proceed? A good choice is to increase choices. As individuals and as a society we can invent methods that will generate as many new good possibilities as possible.” - Kevin Kelly, <a href="https://www.amazon.com/What-Technology-Wants-Kevin-Kelly/dp/0143120174/ref=sr_1_3?s=books&ie=UTF8&qid=1493183772&sr=1-3&keywords=kevin+kelly">What Technology Wants</a> (channeling James Carse - <a href="https://www.amazon.com/Finite-Infinite-Games-James-Carse/dp/1476731713/ref=sr_1_sc_1?ie=UTF8&qid=1493183738&sr=8-1-spell&keywords=finie+and+infitnie+games">Finite and Infinite Games</a>)</p>
</blockquote>
<p>I’m increasingly conscious of the decisions I make with technology. </p>
<p>This is not about avoiding new things. I’m not quite at the stage of yelling at people to get off my lawn.</p>
<p>It is about thinking through the longer term consequences of the choices we make.</p>
<p>I’ve been using the internet for 20 years now. New ideas have come along at a blistering pace, and my position has been to embrace pretty much everything by default.</p>
<p>The scale with which ideas enabled by the internet can impact our lives has, in the past few years, grown significantly - driven mainly through the pace of innovation and scale of production of mobile phones.</p>
<p>Today, our technology choices can have large and rapid ramifications in our real world. For internet enabled technology, this is a relatively new thing. And something I’m not sure we fully grasp yet.</p>
<p>The decisions we make through our choice of technology are often leading to global-scale monopolies. These monopolies have the power to shape many things - the future of jobs, economies, laws, regulations, culture, and societies. Some of this shaping is good. But a lot of it can be not good.</p>
<p>So I’m now putting more consideration into the impact of my technology decisions, and altering how I use technology to ensure that I’m doing my part to create a world that I actually want to live in.</p>
<p>A few of these choices are below. I’m not saying they’re right, but I do hope that through sharing them, more people will take some time to think about the choices they are making:</p>
<ul>
<li>I don’t use Facebook because I found the filter bubbles impossible to pop. The world Facebook presents to me is one where I learn little, and spend a lot. Alongside this is a huge opportunity cost - the average person spends around an hour a day on mostly pointless scrolling[1]. (it also turns out that there’s mounting evidence that <a href="https://www.nytimes.com/2017/05/06/opinion/sunday/dont-let-facebook-make-you-miserable.html">Facebook makes you miserable</a>)</li>
<li>I don’t use Instagram because it reflects something that is not quite reality. After using Snapchat for a while I switched to Instagram briefly, and could not believe how perfect everything (and everyone) looked. It reminded me of the Montesquieu quip… “If you only wished to be happy, this could be easily accomplished; but we wish to be happier than other people, and this is always difficult, for we believe others to be happier than they are.”</li>
<li>I use (and pay for) <a href="https://www.fastmail.com">FastMail</a> because it is a tiny cost to own and control my own email. Given how central email is to my life, I see no reason not to pay what is a trivial amount to not have it voluntarily surveilled.</li>
<li>I use <a href="https://duckduckgo.com">DuckDuckGo</a> because I am not completely comfortable with the surveillance economy that has been built beyond the scope of search ads. I’m totally fine with Google Search serving me ads in results based on my intent, but I’m not happy with where else that data is used. (On top of this, DuckDuckGo has super powerful !bang search operators which make it impossible to go back to Google)</li>
<li>I don’t use Uber because of their systematic immorality that extends far beyond challenging outdated regulation in the taxi industry. (I wish we had <a href="https://www.lyft.com">Lyft</a> in Australia, and I do use <a href="http://www.gocatch.com">GoCatch</a> despite what seems to be a frequent lack of cars)</li>
<li>I use iOS because Apple has consistently and thoughtfully balanced user privacy and data security.</li>
<li>I host my own writing because despite what any publishing startup promises, it might not be there tomorrow.</li>
</ul>
<p>[1] I do acknowledge that Facebook for many users is about keeping in touch, particularly with people overseas. If I was overseas (or a lot of friends or family were) maybe Facebook would play that role. Currently I keep in touch with friends overseas through email. It works pretty well.</p>
<p class='date'>- May 2017</p>We Need to Talk About the Black Boxeshttp://blonde3.com/writing/2016/161129_we_need_to_talk_about_the_black_boxes/2016-11-29T00:00:00Z2016-11-29T00:00:00ZNic Hodges<p>Last week saw two fascinating (and somewhat contradictory) announcements from Facebook. The <a href="https://www.bloomberg.com/news/articles/2016-11-23/facebook-s-quest-to-stop-fake-news-risks-becoming-slippery-slope">first</a> is that they will now work on code to detect and flag fake news, after months of claiming that this was a non-issue. The <a href="https://www.theguardian.com/technology/2016/nov/23/facebook-secret-software-censor-user-posts-china">second</a> is the news that Facebook are looking to finally enter China by implementing tools to allow third parties to censor content.</p>
<p>Behind these announcements (and many others recently from the world of Google, Twitter, and AI) lies a disturbing trend. Our world is increasingly being shaped by black boxes. These black boxes contain code that we cannot see. This code influences our news, entertainment, and relationships in ways that we’ll never understand.</p>
<p>In this sense, it’s never been more important to understand code. And over the past few years that importance has been met with dozens of “Learn to Code” manifestos and programs (even Obama was <a href="https://www.whitehouse.gov/blog/2014/12/10/president-obama-first-president-write-line-code">on board</a>). As we head into a new year, there will no doubt be calls for another “Year of Code”. </p>
<p>But that’s not going to solve the problem. The world doesn’t need to be full of coders any more than it needs to be full of dairy farmers or mechanics or economists. The number of people in the world who need to actually write code is very small. But the number of people in the world who need to understand how code affects their lives is very large. The challenge is not to teach people to “Hello World”, it is to explain how code affects us every day. And that is a much harder challenge.</p>
<p>There’s one thing missing from all the cheery “Learn to Code!” websites. Learning to code, properly learning to code to the point of actually understanding how powerful code can be, is actually quite hard. Yes, printing “Hello World” is relatively easy in most languages (<a href="http://www.aspnetbook.com/basics/using_vs_hello_world.php">most</a>). But the distance between that and understanding how code shapes our world is immense. </p>
<p>Fully learning a technology in order to understand how it shapes our world is a precedent that seems to be unique to computer programming. To take our dairy farmer as an example - people drinking milk haven’t gone off and spent a week on a farm understanding the entire process of grass –> cow –> milk. Let alone the packaging, logistics, and sales machinery that gets that milk to your fridge.</p>
<p>Similarly the average person doesn’t spend years studying economics. But they do understand how interest rates affect their mortgage, and that if China slows down then Australia won’t have as much money to build roads and hospitals and schools. </p>
<p>The number of people who actually make a living from writing code is relatively small (and in the US, <a href="http://www.bls.gov/ooh/computer-and-information-technology/computer-programmers.htm">actually declining</a>). So teaching people to code doesn’t really make sense in this respect either.</p>
<p>So why has this push been so strong? Because people who do code understand the power they possess. And they want more people to be aware of that power. The push came from the right place, but has been executed in the wrong way. In some ways it’s even backfired, creating and amplifying a division between people who can code and people who can’t. People who are in control and people who aren’t. Every person who has started on their “year of code” only to drop off a week later has this view of the world reinforced.</p>
<p>The black boxes are only going to become more numerous and more complex. Every change to a Facebook or Google algorithm makes that black box more complex. Every new AI advance is another box we can’t open. </p>
<p>It’s more critical than ever that people understand how code shapes their world. That won’t happen by teaching everyone to code, but through giving everyone the language to ask the right questions about code.</p>
<p class='date'>- November 2016</p>Streambit Blockchain Powered Twitter Tip Bot for Wpp Streamhttp://blonde3.com/writing/2016/160419_streambit_blockchain_powered_twitter_tip_bot_for_wpp_stream/2016-04-19T00:00:00Z2016-04-19T00:00:00ZNic Hodges<p>In March I spent a few days in Phuket, at <a href="http://stream.wpp.com">WPP’s Stream Asia</a> conference. Stream is one of the best conferences in the world, combining great people, big thinking, and a lot of amazing ideas.</p>
<p>One of the highlights of Stream is The Gadgethon - an opportunity for participants to get up on stage and show off some piece of tech-nerdery they’ve built (or bought at the airport on the way in).</p>
<p>Knowing I’d be talking about Bitcoin and blockchains during the conference, and given messaging bots seem to be on-trend, I decided to build a blockchain-powered Twitter tip-bot, and launch it at The Gadgethon.</p>
<p>So <a href="http://streambit.co">Streambit</a> was born. Streambit is a bot that rewards participants at Stream for sharing. Those rewards have real value, in the form of Bitcoin.</p>
<hr />
<p>So how exactly does that work? Streambit constantly monitors Twitter, looking for interactions between people at Stream - a @mention, or a retweet. These types of interaction act as a proxy for identifying people who are adding value at Stream.</p>
<p>When Streambit sees an interaction, it transfers some bitcoin to the person or people who are being mentioned or retweeted. Each person at Stream starts with a small balance of bitcoin. So these tips aren’t being issued from a central server, but rather peer-to-peer. The tip flows directly from the account of the person who tweeted to the account of the person who was mentioned or retweeted. </p>
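<p>The tip flow above can be sketched as a simple in-memory ledger. This is a minimal illustration, not Streambit’s actual implementation - the real bot held on-chain Bitcoin wallets per user, and the names and satoshi amounts here are invented for the example:</p>

```python
# Minimal sketch of Streambit's peer-to-peer tip logic.
# All names and amounts are illustrative; the real bot moved
# actual bitcoin between per-user wallets.

STARTING_BALANCE = 10_000  # satoshis credited to each participant
TIP_AMOUNT = 500           # satoshis moved per mention or retweet

class Ledger:
    def __init__(self):
        self.balances = {}

    def register(self, handle):
        # Each Stream participant starts with a small balance.
        self.balances.setdefault(handle, STARTING_BALANCE)

    def tip(self, sender, recipient):
        # Tips flow peer-to-peer: directly from the tweeting
        # account to the mentioned or retweeted account,
        # not from a central pool.
        if self.balances.get(sender, 0) < TIP_AMOUNT:
            return False
        self.balances[sender] -= TIP_AMOUNT
        self.balances[recipient] = self.balances.get(recipient, 0) + TIP_AMOUNT
        return True

def handle_tweet(ledger, author, mentioned_handles):
    # A @mention or retweet is treated as a proxy for value,
    # so every mentioned account receives a tip from the author.
    for handle in mentioned_handles:
        ledger.register(handle)
        ledger.tip(author, handle)
```

<p>The interesting design choice is in that last function: the bot never judges content, it just treats the existing social gestures (mentions, retweets) as the tipping signal.</p>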
<p>While Streambit was pre-populated from the Stream website, users could also register by tweeting “@streambit hello”. This means that every person at Stream who was on Twitter now had a Bitcoin wallet created for them, with a small balance in it.</p>
<p>Further interaction with the bot was possible, with commands including:</p>
<ul>
<li>“@streambit balance” - which would return the current balance of your account in bitcoin (i.e. your starting balance, minus tips you’ve given out, plus tips you’ve received)</li>
<li>“@streambit deposit” - which would return a wallet address to deposit more bitcoin in to your account (for anyone who was super-keen on tweeting and got their account balance to zero)</li>
<li>“@streambit withdraw &lt;address&gt;” - which would move a user’s current balance to another wallet address which they controlled, allowing them to spend the bitcoin they had earned however they wanted.</li>
</ul>
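<p>The command handling above amounts to a small dispatcher over tweet text. A rough sketch, with hypothetical responses standing in for the real bot’s wallet-backend calls:</p>

```python
# Illustrative sketch of Streambit's command parsing. The
# deposit/withdraw branches stand in for real wallet operations;
# balances here are plain satoshi counts in a dict.

def handle_command(balances, user, text):
    words = text.split()
    # Commands are tweets of the form "@streambit <command> [args]".
    if len(words) < 2 or words[0].lower() != "@streambit":
        return None
    command, args = words[1].lower(), words[2:]
    if command == "balance":
        return f"Your balance is {balances.get(user, 0)} satoshis"
    if command == "deposit":
        # The real bot replied with the user's on-chain wallet address.
        return "Deposit address: <your wallet address>"
    if command == "withdraw" and args:
        # Move the user's full balance to an external address they control.
        amount, balances[user] = balances.get(user, 0), 0
        return f"Sent {amount} satoshis to {args[0]}"
    return "Unknown command"
```

<p>Note that only the command word is lower-cased - Bitcoin addresses are case-sensitive, so the withdraw argument has to pass through untouched.</p>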
<p>While being seamless, invisible and frictionless is great, being visible in some way is also important. So I threw together a simple <a href="http://streambit.co">dashboard</a> to show the Streambit most-tipped leaderboard, and recent tweets about Stream.</p>
<p><img src="http://blonde3.com/writing/2016/img/streambit.jpg" alt="" /></p>
<hr />
<h2>Why?</h2>
<p>The way that we recognise and reward content that we find valuable in the digital world is broken. </p>
<p>We need a way to acknowledge that we <em>all</em> get value, real value, from the abundance of content online.</p>
<p>We need a way to transfer that value. Even if it’s minuscule, we should be able to transfer real value in a seamless, frictionless, and invisible way. In the same way content flows across the world today, real value should flow in the future.</p>
<p><a href="https://en.wikipedia.org/wiki/Block_chain_(database)">Blockchains</a> are likely the first step towards a future where this is possible. Blockchains already handle the frictionless and invisible transfer of value.</p>
<p>The seamless piece is still to be worked out. In the case of Streambit, seamlessness was achieved by limiting the idea to one small patch of online content (Twitter), and then making a few assumptions (all mentions and retweets were proxies for “I found that person interesting and valuable”).</p>
<p>While Streambit was a small experiment, it’s not hard to see how these ideas expand to encapsulate more of our online lives in the near future, particularly around three key areas:</p>
<ol>
<li><strong>Invisible and silent value exchange, via computers simply talking.</strong> We’re still at the dial-up modem phase of value-exchange networks. Remember when you had to actually dial up to use the internet? Now it’s just always on, always there. Value-exchange networks will get to this point, where they are invisible and always on.</li>
<li><strong>A value exchange layer that is abstracted away from local currency.</strong> It doesn’t matter if it’s bitcoins or ethereum or widgetcoins, the currency of automated value networks will not be an existing local currency.</li>
<li><strong>A broad base of value creation and consumption opportunities.</strong> If people can earn and spend value within the network (ie. without “cashing out” to local currency), the network succeeds via network effects. People should be able to earn (and spend) via everything from sharing their WiFi to sharing a ride in their car to sharing a piece of music they’ve created.</li>
</ol>
<p class='date'>- April 2016</p>