Wednesday 4 January 2023

Cuttings: December 2022

‘No one had seen anything like it’: how video game Pong changed the world – article by Kyle MacNeill in The Guardian. “Its beauty stemmed from its clarity, easy enough to be explained in a heaving bar after a few beers. ‘It was the first time anyone had seen anything like it and they knew instantly how to play it,’ [Atari founder Nolan Bushnell] says. After some deliberation, a sticker was stuck on to the cabinet explaining the rules, just in case it was required. To retro game enthusiasts, they now read like holy commandments: ‘Insert quarter. Serves automatically. Avoid missing ball for high score,’ [game designer Al Alcorn] reels off automatically. ‘I want it on my tombstone,’ he laughs.”

‘There was an explosion, and I had to close my eyes’: how TV left 12,000 children needing a doctor – article by Benjie Goodhart in The Guardian. “At precisely 6.51pm on 16 December 1997, hundreds of children across Japan experienced seizures. In total, 685 – 310 boys and 375 girls – were taken by ambulance to hospital. Within two days, 12,000 children had reported symptoms of illness. The common factor in this sudden mass outbreak was an unlikely culprit: an episode of the Pokémon cartoon series.... Twenty minutes into the cartoon, an explosion took place, illustrated by an animation technique known as paka paka, which broadcast alternating red and blue flashing lights at a rate of 12Hz for six seconds. Instantly, hundreds of children experienced photosensitive epileptic seizures – accounting for some, but far from all, of the hospitalisations.... The mystery persisted for four years, until it piqued the attention of Benjamin Radford, a research fellow at the Committee for Skeptical Inquiry in the US... Along with Robert Bartholomew, a medical sociologist, he set about examining the timeline of events, and unearthed a key detail. ‘What people missed was that it wasn’t just a one-night event but instead unfolded over several days, and the contagion occurred in schools and over the news media.’ What Radford and Bartholomew discovered was that the vast majority of affected children had become ill after hearing about the programme’s effects.... The symptoms (headaches, dizziness, vomiting) were, says Radford, ‘much more characteristic of mass sociogenic illness [MSI] than photosensitive epilepsy’. MSI, also known as mass psychogenic illness (MPI), and more colloquially as mass hysteria, is a well-documented phenomenon ... According to Radford: ‘MSI is complex and often misunderstood, but basically it’s when anxiety manifests itself in physical symptoms that can be spread through social contact. It is often found in closed social units such as factories and schools, where there is a strong social hierarchy. The symptoms are real – the victims are not faking or making them up – but the cause is misattributed.’ The condition is perhaps best understood as the placebo effect in reverse. People can make themselves ill from nothing more than an idea.”

Becoming a chatbot: my life as a real estate AI’s human backup – article by Laura Preston in The Guardian. “Brenda, the recruiter told me, was a sophisticated conversationalist, so fluent that most people who encountered her took her to be human. But like all conversational AIs, she had some shortcomings. She struggled with idioms and didn’t fare well with questions beyond the scope of real estate. To compensate for these flaws, the company was recruiting a team of employees they called the operators. The operators kept vigil over Brenda 24 hours a day. When Brenda went off-script, an operator took over and emulated Brenda’s voice. Ideally, the customer on the other end would not realise the conversation had changed hands, or that they had even been chatting with a bot in the first place. ... Before my first shift, I had imagined the operators were like ventriloquists. Brenda would carry on a conversation, and when she started to fail an operator would speak in her place. In reality, I rarely spoke for Brenda. Most of her missteps were errors of comprehension. She would seize on the wrong keyword and cue up a non-sequitur, or she would think she did not know how to answer when she actually had the right response on hand. In these situations, all I had to do was fiddle with the classifications – just a mouse click or two – and Brenda was moving along. In [other] cases, I softened her aggressive recitation of facts with line breaks and merry affirmations. I wasn’t so much taking over for her as I was turning cranks behind the curtain, nudging her this way and that. Our messages were little collaborations. We were a two-headed creature, neither of us speaking on our own, but passing the words between us. But there were moments when a full takeover was necessary. When Brenda did not understand a message, and knew she did not understand, she tagged the message with HUMAN_FALLBACK[:] Brenda ceded the conversation to me, and I had to assume her voice and manner.... Eventually I reached a level of virtuosity where I could clear the inbox without much mental effort. ... My eyes would apprehend the web of critical words – pets, rent, utilities – and my hands would hit keys like notes in a musical passage. I stopped worrying about Brenda’s tone and began letting any message through as long as it was factually accurate. I realised that when Brenda sounded odd and graceless, people were less likely to get intimate, which meant less HUMAN_FALLBACK, which meant less effort for me. Months of impersonating Brenda had depleted my emotional resources. I no longer delighted in those rambling, uninhibited messages, full of voice and human tragedy. All I wanted was to glide through my shifts in a stupor. It occurred to me that I wasn’t really training Brenda to think like a human, Brenda was training me to think like a bot, and perhaps that had been the point all along.”
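The workflow Preston describes – a classifier that answers on-script messages itself, and tags anything it knows it cannot parse for a human operator to answer in the bot’s voice – can be sketched roughly as follows. This is purely illustrative: apart from the HUMAN_FALLBACK tag mentioned in the article, every name, keyword, and threshold here is invented, not the company’s actual system.

```python
# Illustrative sketch of a human-fallback loop like the one Preston describes.
# The intent table, threshold, and replies are invented for this example.

CONFIDENCE_THRESHOLD = 0.75

# Toy intent classifier: keyword matching stands in for the real model,
# echoing the "web of critical words" (pets, rent, utilities) in the article.
INTENTS = {
    "pets": "Pets under 25 lbs are welcome with a deposit.",
    "rent": "Rent starts at $1,200/month for a one-bedroom.",
    "utilities": "Water and trash are included; electricity is not.",
}

def classify(message: str):
    """Return (intent, reply, confidence) for a customer message."""
    words = message.lower()
    for keyword, reply in INTENTS.items():
        if keyword in words:
            return keyword, reply, 0.9
    # The bot knows it does not understand: tag the message for a human.
    return "HUMAN_FALLBACK", None, 0.0

def handle(message: str, operator_reply):
    """Answer automatically when confident; otherwise cede to the operator,
    who replies in the bot's voice via the operator_reply callback."""
    intent, reply, confidence = classify(message)
    if intent == "HUMAN_FALLBACK" or confidence < CONFIDENCE_THRESHOLD:
        return operator_reply(message)
    return reply

# Usage: the operator callback emulates the bot's voice for off-script messages.
print(handle("Do you allow pets?", lambda m: "(operator) Let me check!"))
print(handle("What's the neighborhood vibe like?", lambda m: "(operator) Lovely and quiet!"))
```

Note the asymmetry the article turns on: most traffic never reaches the operator, but the fallback path is where all the human effort concentrates.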

Escape from Model Land by Erica Thompson: the power and pitfalls of prediction – review by Felix Martin in The Guardian. “‘The only function of economic forecasting,’ wrote the great American economist John Kenneth Galbraith, ‘is to make astrology look respectable.’ It is characteristic of Erica Thompson’s sprightly and highly original new book on the uses and abuses of mathematical modelling that she dares to turn Galbraith’s verdict on its head. The medieval practice of casting horoscopes, she shows in one typically engaging section that embodies her most important themes, has a surprising amount to teach us about the modern practice of using models to guide policy.... The central common challenge is working out how much of what we learn in pristine but artificial models remains valid in messy but concrete real life. One way of figuring this out is quantitative: you compare the predictions of the model against new, incoming data. A critical obstacle here is that predictions based on modern mathematical models, no less than those based on medieval horoscopes, usually depend on an extensive hinterland of assumptions. That makes testing the validity of their forecasts intrinsically difficult: were the assumptions wrong, or was it just that not enough assumptions were included?... A second, qualitative way of determining the success of predictions is much more common: reliance on expert judgment. The pitfalls of this route were also well known to the medieval courts. Only those versed in the most cutting-edge mathematical knowledge were skilled enough to interpret medieval horoscopes. As such, it was in practice impossible for the client to come to their own conclusions. The result was that an exclusive guild, whose true competence remained unknown, ended up marking their own homework. The same could be said today.... It’s not all bad news. Thompson is a data scientist and mathematical modeller herself, and her book is far from an exercise in model-bashing. It is instead a nuanced and constructive critique of what remains an invaluable analytical method – just not necessarily for the reasons you might expect. For example, even though the astrologers’ models of natural forces and human behaviour were wrong, the practice of casting horoscopes could still be a useful aid to policymaking. It brought systematic thinkers into the orbit of otherwise impulsive rulers; it allowed the discussion of important, otherwise taboo subjects in the safe context of interpreting the stars; and it could give decision-makers the public narrative they needed in order to act. The same applies today. As Thompson shows, mathematical model-building can still be a constructive tool, even if the models themselves are flawed. As Dwight D Eisenhower said: ‘Plans are useless, but planning is indispensable’.”

Machine-learning language model fails Politics 101 – blog post by John Naughton in his Memex 1.1 blog. “I’ve been thinking a lot about ChatGPT ... so I was interested that, when Ben Thompson’s daughter was doing her homework for a politics course, she asked her Dad to put this question to ChatGPT: ‘Did Thomas Hobbes believe in separation of powers?’ This was its reply: ‘Yes, Thomas Hobbes believed in the concept of separation of powers. In his book “Leviathan,” Hobbes argued that a strong, centralized government is necessary to maintain order and prevent conflict in society. However, he also believed that a single ruler or group should not have complete control over all aspects of government. Instead, he advocated for the separation of powers into different branches, with each branch having specific responsibilities and powers. ...’ The most polite thing one could say about this is that it is baloney. But it’s confident-sounding baloney. As Ben puts it: ‘Hobbes was a proponent of absolutism, the belief that the only workable alternative to anarchy — the natural state of human affairs — was to vest absolute power in a monarch; checks and balances was the argument put forth by Hobbes’ younger contemporary John Locke, who believed that power should be split between an executive and legislative branch.’” (For more on the alarming power of ChatGPT, and the prediction that it will lead to the death of the student essay, see the references in John Naughton’s column and blog.)
