A more poetic AI

I’ve a number of posts here tagged AI and art, but not so many about its impact on music or poetry. Let’s put that right. But first (via It’s Nice That), a quick recap.

The A-Z of AI – With Google
This beginner’s A-Z guide is a collaboration between the Oxford Internet Institute (OII) at the University of Oxford and Google, intended to break a complex area of computer science down into entry-level explanations that will help anyone get their bearings and understand the basics.

Topics include bias and ethics, as well as quantum computing and the Turing test. Nothing about Shakespeare though.

This AI poet mastered rhythm, rhyme, and natural language to write like Shakespeare – IEEE Spectrum
Deep-speare’s creation is nonsensical when you read it closely, but it certainly “scans well,” as an English teacher would say—its rhythm, rhyme scheme, and the basic grammar of its individual lines all seem fine at first glance. As our research team discovered when we showed our AI’s poetry to the world, that’s enough to fool quite a lot of people; most readers couldn’t distinguish the AI-generated poetry from human-written works.

I think they’re better off sticking to the visuals.

Beck launches Hyperspace: AI Exploration, a visual album with NASA – It’s Nice That
The project was made possible by AI architects and directors OSK, founded by artists Jon Ray and Isabelle Albuquerque, who began the project by asking, “How would artificial intelligence imagine our universe?” In answering this question it allowed the directors to create “a unique AI utilising computer vision, machine learning and Generative Adversarial neural Networks (GAN) to learn from NASA’s vast archives.” The AI then trained itself through these thousands of images, data and videos, to then begin “creating its own visions of our universe.”

Some of them can really hold a tune, though.

What do machines sing of? – Martin Backes
“What do machines sing of?” is a fully automated machine, which endlessly sings number-one ballads from the 1990s. As the computer program performs these emotionally loaded songs, it attempts to apply the appropriate human sentiments. This behavior of the device seems to reflect a desire, on the part of the machine, to become sophisticated enough to have its very own personality.

Lastly, it’s good to see that you can still be silly with technology and music.

Human authors have nothing to fear—for now

A new AI language model generates poetry and prose – The Economist
But the program is not perfect. Sometimes it seems to regurgitate snippets of memorised text rather than generating fresh text from scratch. More fundamentally, statistical word-matching is not a substitute for a coherent understanding of the world. GPT-3 often generates grammatically correct text that is nonetheless unmoored from reality, claiming, for instance, that “it takes two rainbows to jump from Hawaii to 17”.

We’re doomed! #2

You can now boot a Windows 95 PC inside Minecraft and play Doom on it – The Verge
If you’ve ever wanted to build a real and working Windows 95 PC inside Minecraft, now is the time. A new VM Computers mod has been created for Minecraft that allows players to order computer parts from a satellite orbiting around a Minecraft world and build a computer that actually boots Windows 95 and a variety of other operating systems.

uDrunkMate on Reddit

Here’s another one, though all might not be as it seems.

Doom running in task manager with each CPU core as a pixel, supposedly – Boing Boing
In this footage, a supercomputer’s CPU cores — nearly 900 of them — are neatly lined up in the Task Manager. The Doom logo appears, generated by code that targets each core. Then Doom itself plays, each “pixel” generated by thrashing a core with just the right amount of busy work.

You’re right to be incredulous — the article goes on to explain how that video might be fake.

Free vintage computers

Well, models of them, at least.

Papercraft Models – Rocky Bergen
Construct the computer from your childhood or build an entire computer museum at home with these paper models, free to download and share.

Yes, there’s the Apple Macintosh, the BBC microcomputer and some Nintendo and Sega systems, but my faves are definitely the Commodore 64, complete with printer and paper, and the Sinclair ZX Spectrum with a TV and cassette recorder from the 1982 Dixons catalogue, no less. (via Boing Boing)

It’s been more years than I care to remember, but I can still hear and feel those keyboards—the clack of the Commodore’s and the squidge of those silly little Spectrum keys.

And for more retro goodness, how about two minutes of beeps and chimes from other old machines?

Windows All Startup Sounds on Piano – Bored Piano

New for old

iPad looks great in an SE/30 case – Boing Boing
At Reddit, mtietje posted this remarkable photo of their lockdown project: an iPad stand made using an old Macintosh SE/30. The display size is perfect for the 9.7″ iPad, and now that iPadOS can use mice they work much better as “normal” computers.

That looks very stylish. Perhaps I should give something like that a go with my old bits of Apple tech. I’ve got an iMac G3 and an iBook G4 with a dodgy keyboard.

They don’t make ’em like that anymore. But sadly, after languishing in the cellar for more years than I care to remember, they won’t boot up anymore either.

The tech left behind

I first read this on my phone and now here I am, blogging about it on my tablet.

Why laptops could be facing the end of the line – The Conversation
Research shows that PC and laptop ownership, usage and importance have declined over the past three years, replaced largely by smartphones. A survey of internet users found just 15% thought their laptop was their most important device for accessing the internet, down from 30% in 2015, while 66% thought their smartphone was most important, up from 32%.

This has led some commentators to predict the slow death of the laptop because of young people’s preference for and greater familiarity with the devices in their pocket. But a survey by UK regulator Ofcom in 2017 also found there has been a record rise in older people using smartphones and tablets.

Good riddance?

How your laptop ruined your life – The Atlantic
As laptops have kept improving, and Wi-Fi has continued to reach ever further into the crevices of American life, however, the reality of laptops’ potential stopped looking quite so rosy. Instead of liberating white-collar and “knowledge” workers from their office, laptops turned many people’s whole life into an office. Smartphones might require you to read an after-hours email or check in on the office-communication platform Slack before you started your commute, but portable computers gave workers 24-hour access to the sophisticated, expensive applications—Salesforce CRM, Oracle ERP, Adobe Photoshop—that made their full range of duties possible.

Mobiles, mobiles, mobiles. They’re practically compulsory these days.

Desktop vs Mobile vs Tablet market share worldwide – StatCounter Global Stats
Mobile – 52.02%, Desktop – 45.29%, Tablet – 2.7%

But not all mobiles, though. Did you ever have a Blackberry, the one that perhaps started it all?

RIP Blackberry phones — you really f***ed us over, but that keyboard was great – The Outline
Blackberry phones died a slow death throughout the 2010s, as people migrated to newer phones with a wider selection of functions, apps, and so on. But the Blackberry’s rise was marked by the cultural shift that is, I think, the greatest anxiety of the smartphone era: the rapid upswing in how much time we spend on our damn phones.

When Barack Obama became president in 2008, he famously fought for (and won) the right to keep using his Blackberry (the phone would become the official device given out by large swathes of the federal government, including Congress). And years before reverting to a “dumb phone” became a thing for trendsetters like Anna Wintour, magazine writers tried the same stunt to lessen their Blackberry usage. One Daily Mail headline from 2006 warned of a “Blackberry addiction ‘similar to drugs,’” describing the kind of behavior we now readily associate with social media and phones more generally (“One key sign of a user being addicted is if they focus on their Blackberry ignoring those around them.”).

The rise and fall of BlackBerry – YouTube

Another GlobalStats chart that caught my eye was this one, illustrating Chrome’s dominance over the other browsers.

Browser market share worldwide – StatCounter Global Stats
Chrome – 64.1%, Safari – 17.21%, Firefox – 4.7%, Samsung Internet – 3.33%, UC Browser – 2.61%, Opera – 2.26%

Here’s The Register’s take on that.

Why Firefox? Because not everybody is a web designer, silly – The Register
The problem with thinking that the web would be better with only one browser is that it raises the question – better for who? Better for web designers? Maybe, but that’s a statistically insignificant portion of the people on the web. Better for users? How?

But for those of us that are/were, here’s a look back at a simpler time.

Old CSS, new CSS – Fuzzy Notepad
Damn, I miss those days. There were no big walled gardens, no Twitter or Facebook. If you had anything to say to anyone, you had to put together your own website. It was amazing. No one knew what they were doing; I’d wager that the vast majority of web designers at the time were clueless hobbyist tweens (like me) all copying from other clueless hobbyist tweens. … Everyone who was cool and in the know used Internet Explorer 3, the most advanced browser, but some losers still used Netscape Navigator so you had to put a “Best in IE” animated GIF on your splash page too. […]

Sadly, that’s all gone now — paved over by homogenous timelines where anything that wasn’t made this week is old news and long forgotten. The web was supposed to make information eternal, but instead, so much of it became ephemeral. I miss when virtually everyone I knew had their own website. Having a Twitter and an Instagram as your entire online presence is a poor substitute.

Indeed.

A quick word about data compression

Yes, data compression is a very technical subject but, as this Quartz Obsession summary shows, its roots are quite analogue.

Data compression
1867: Chicago Tribune publisher Joseph Medill argues for eliminating excess letters from the English language, like dropping the “e” in “favorite.” […]

1934: Tribune publisher Robert R. McCormick, Medill’s grandson, institutes compressed spelling rules; some stick (“analog,” “canceled”), some don’t (“hocky,” “doctrin”).
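The Tribune’s trick — dropping letters a reader can reconstruct from context — is the same intuition behind modern compression: spend fewer symbols on what’s predictable. Here’s a toy sketch of “compressed spelling” as a rule-based respeller. The two rules are my own illustrative guesses, not the Tribune’s actual style guide.

```python
# Toy "compressed spelling" in the spirit of the Tribune's reforms.
# Each rule rewrites a word ending; anything unmatched passes through.
RULES = [
    ("ogue", "og"),  # analogue -> analog, catalogue -> catalog
    ("ite", "it"),   # favorite -> favorit
]

def compress(word):
    """Apply the first matching ending rule, or return the word unchanged."""
    for old, new in RULES:
        if word.endswith(old):
            return word[: -len(old)] + new
    return word

print(compress("analogue"))  # analog
print(compress("favorite"))  # favorit
print(compress("chalk"))     # chalk (no rule matches)
```

Lossy, of course — “favorit” can’t tell you the original had an “e” — which is exactly why some of McCormick’s spellings stuck and others didn’t.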

Can you hear that?

After that post about movie music being too loud for Hugh Grant and others, myself included, here’s an in-depth investigation into more noise pollution, this time of a quieter but more insidious kind.

Why is the world so loud?
Some nights, Thallikar couldn’t sleep at all. He started wearing earplugs during the day, and stopped spending time outdoors. He looked for excuses to leave town and, in the evenings, returned to his old neighborhood in Tempe to take his constitutionals there. As he drove home, he’d have a pit in his stomach. He couldn’t stop himself from making the noise a recurring conversation topic at dinner.

Not only was the whine itself agitating—EHHNNNNNNNN—but its constant drone was like a cruel mnemonic for everything that bothered him: his powerlessness, his sense of injustice that the city was ignoring its residents’ welfare, his fear of selling his home for a major loss because no one would want to live with the noise, his regret that his family’s haven (not to mention their biggest investment) had turned into a nightmare. EHHNNN. EHHNNNNNNNNN. EHHNNNNNNNNNNNN. He tried meditating. He considered installing new windows to dull the hum, or planting trees to block the noise. He researched lawyers. And he made one final appeal to the newly elected members of the Chandler city council.

The eventual culprit? CyrusOne, a massive data centre just down the road. It already looks enormous but, according to the slick promotional video, it’s set to get much larger.

Lots of talk about security, air flow, redundancy and so on, but nothing about the effects of noise pollution on the neighbouring residential areas.

After a few other stops, we doubled back to concentrate on the area around CyrusOne. For more than an hour, we circled its campus, pulling over every so often. As the sun and traffic dropped, the intensity of the hum rose. The droning wasn’t loud, but it was noticeable. It became irritatingly noticeable as the sky dimmed to black, escalating from a wheezy buzz to a clear, crisp, unending whine.

“This is depressing,” Thallikar said as we stood on a sidewalk in Clemente Ranch. “Like somebody in pain, crying. Crying constantly and moaning in pain.”

We were silent again and listened to the data center moaning. Which was also, in a sense, the sound of us living: the sound of furniture being purchased, of insurance policies compared, of shipments dispatched and deliveries confirmed, of security systems activated, of cable bills paid. In Forest City, North Carolina, where some Facebook servers have moved in, the whine is the sound of people liking, commenting, streaming a video of five creative ways to make eggs, uploading bachelorette-party photos. It’s perhaps the sound of Thallikar’s neighbor posting “Has anyone else noticed how loud it’s been this week?” to the Dobson Noise Coalition’s Facebook group. It’s the sound of us searching for pink-eye cures, or streaming porn, or checking the lyrics to “Old Town Road.” The sound is the exhaust of our activity. Modern life—EHHNNNNNNNN—humming along.

How about we end with a more lyrical hum?

Philip Glass – Changing Opinion

Learning to drive is (still) difficult

An interesting visualisation of all the reasons why creating safe self-driving cars is harder than the hype would have us believe.

How does a self-driving car work? Not so great.
The autonomous vehicle industry has made lots of cheery projections: Robocars will increase efficiency and independence and greatly reduce traffic deaths, which occurred at the rate of about 100 a day for the past three years nationwide. But to deliver on those promises, the cars must work. Our reporting shows the technology remains riddled with problems.

There are flaws in how well cars can “see” and “hear,” and how smoothly they can filter conflicting information from different sensors and systems. But the biggest obstacle is that the vehicles struggle to predict how other drivers and pedestrians will behave among the fluid dynamics of daily traffic. […]

Gill Pratt, the head of the Toyota Research Institute, said in a speech earlier this year that it’s time to focus on explaining how hard it is to make a self-driving car work.

Who’s really in charge?

Money makes the world go round. But who’s making the money go round?

The stockmarket is now run by computers, algorithms and passive managers
The execution of orders on the stockmarket is now dominated by algorithmic traders. Fewer trades are conducted on the rowdy floor of the NYSE and more on quietly purring computer servers in New Jersey. According to Deutsche Bank, 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input. Equity-derivative markets are also dominated by electronic execution, according to Larry Tabb of the Tabb Group, a research firm.

Nothing to worry about, right?

Turing Test: why it still matters
We’re entering the age of artificial intelligence. And as AI programs get better and better at acting like humans, we will increasingly be faced with the question of whether there’s really anything that special about our own intelligence, or if we are just machines of a different kind. Could everything we know and do one day be reproduced by a complicated enough computer program installed in a complicated enough robot?

Robots, eh? Can’t live with ’em, can’t live without ’em.

Of course citizens should be allowed to kick robots
Because K5 is not a friendly robot, even if the cutesy blue lights are meant to telegraph that it is. It’s not there to comfort senior citizens or teach autistic children. It exists to collect data—data about people’s daily habits and routines. While Knightscope owns the robots and leases them to clients, the clients own the data K5 collects. They can store it as long as they want and analyze it however they want. K5 is an unregulated security camera on wheels, a 21st-century panopticon.

But let’s stay optimistic, yeah?

InspiroBot
I am an artificial intelligence dedicated to generating unlimited amounts of unique inspirational quotes for endless enrichment of pointless human existence.

Don’t leave your computer unattended

Or this might happen.

Update faker
Update Faker allows you to “fake a system update”, it’s the perfect way to prank your friends, family members or colleagues. Especially when they’re working on something rather important.

Yes, it’s just a silly prank (reminds me a little of Hacker Typer), but you could see it as an important security/GDPR lesson.

Ever since the launch of updatefaker.com we’ve been flooded with positive feedback both online and in real life. And everyone who’s ever fallen victim to update faker will never leave their PC unattended again, which certainly is a good thing. You never know what bad things people are up to. This website is literally one of the least bad things that can happen to an unattended PC.

Generative art’s rich history on show

Artnome’s Jason Bailey on a generative art exhibition he co-curated.

Kate Vass Galerie
The Automat und Mensch exhibition is, above all, an opportunity to put important work by generative artists spanning the last 70 years into context by showing it in a single location. By juxtaposing important works like the 1956/’57 oscillograms by Herbert W. Franke (age 91) with the 2018 AI Generated Nude Portrait #1 by contemporary artist Robbie Barrat (age 19), we can see the full history and spectrum of generative art as has never been shown before.

Zurich’s a little too far, unfortunately, so I’ll have to make do with the press release for now.

Generative art gets its due
In the last twelve months we have seen a tremendous spike in the interest of “AI art,” ushered in by Christie’s and Sotheby’s both offering works at auction developed with machine learning. Capturing the imaginations of collectors and the general public alike, the new work has some conservative members of the art world scratching their heads and suggesting this will merely be another passing fad. What they are missing is that this rich genre, more broadly referred to as “generative art,” has a history as long and fascinating as computing itself. A history that has largely been overlooked in the recent mania for “AI art” and one that co-curators Georg Bak and Jason Bailey hope to shine a bright light on in their upcoming show Automat und Mensch (or Machine and Man) at Kate Vass Galerie in Zurich, Switzerland.

Generative art, once perceived as the domain of a small number of “computer nerds,” is now the artform best poised to capture what sets our generation apart from those that came before us – ubiquitous computing. As children of the digital revolution, computing has become our greatest shared experience. Like it or not, we are all now computer nerds, inseparable from the many devices through which we mediate our worlds.

The press release alone is a fascinating read, covering the work of a broad range of artists and themes, past and present. For those that can make the exhibition in person, it will also include lectures and panels from the participating artists and leaders on AI art and generative art history.

Process Compendium (Introduction) on Vimeo

Use IT, don’t abuse IT

Two extremes of the use (no use, misuse) of technology in education.

Virtual computing
Ghanaian teacher Richard Appiah Akoto faced a difficult problem: He needed to prepare his students for a national exam that includes questions on information technology, but his school hadn’t had a computer since 2011.

So he drew computer screens and devices on his blackboard using multicolored chalk.

‘I teach computing with no computers’ – BBC News

The article continues:

After Akoto’s story went viral last March, Microsoft flew him to Singapore for an educators’ exchange and pledged to send him a device from a business partner. He’s also received desktops and books from a computer training school in Accra and a laptop from a doctoral student at the University of Leeds.

Meanwhile…

Government pledge to ‘beat the cheats’ at university
In the first of a series of interventions across the higher education sector, Damian Hinds has challenged PayPal to stop processing payments for ‘essay mills’ as part of an accelerated drive to preserve and champion the quality of the UK’s world-leading higher education system.

Hinds calls on students to report peers who use essay-writing services
The true scale of cheating is unknown, but new technology has made an old problem considerably easier. In 2016, the higher education standards body, the Quality Assurance Agency (QAA) found about 17,000 instances of cheating per year in the UK, but the number of students using essay-writing services is thought to be higher as customised essays are hard to detect. A study by Swansea University found one in seven students internationally have paid for someone to write their assignments.

Electronic embroidery

As if computers weren’t complicated enough already.

A programmable 8-bit computer created using traditional embroidery techniques and materials
The Embroidered Computer by Irene Posch and Ebru Kurbak doesn’t look like what you might expect when you think of a computer. Instead, the work looks like an elegantly embroidered textile, complete with glass and magnetic beads and a meandering pattern of copper wire. The materials have conductive properties which are arranged in specific patterns to create electronic functions. Gold pieces on top of the magnetic beads flip depending on the program, switching sides as different signals are channeled through the embroidered work.

See also The 200 Year Old Computer for more connections between thread and computing.

Pictures under glass

Following on from yesterday’s post about Joe Clark’s frustrations with various aspects of iPhone interface design (and smartphone design more broadly, I think), here are a few more.

First, Craig Mod on the new iPads — amazing hardware, infuriating software.

Getting the iPad to Pro
The problems begin when you need multiple contexts. For example, you can’t open two documents in the same program side-by-side, allowing you to reference one set of edits, while applying them to a new document. Similarly, it’s frustrating that you can’t open the same document side-by-side. This is a weird use case, but until I couldn’t do it, I didn’t realize how often I did do it on my laptop. The best solution I’ve found is to use two writing apps, copy-and-paste, and open the two apps in split-screen mode.

Daily iPad use is riddled with these sorts of kludgey solutions.

Switching contexts is also cumbersome. If you’re researching in a browser and frequently jumping back and forth between, say, (the actually quite wonderful) Notes.app and Safari, you’ll sometimes find your cursor position lost. The Notes.app document you were just editing occasionally resets fully to the top of itself. For a long document, this is infuriating and makes every CMD-TAB feel dangerous. It doesn’t always happen, and the behavior is unpredictable, which makes things worse. This interface “brittleness” makes you feel like you’re using an OS in the wrong way.

How we use the OS, the user interface, is key. Here’s Bret Victor on why future visions of interface design are missing a huge trick – our hands are more than just pointy fingers.

A brief rant on the future of interaction design
Go ahead and pick up a book. Open it up to some page. Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.

Go ahead and pick up a glass of water. Take a sip. Notice how you know how much water is left, by how the weight shifts in response to you tipping it.

Almost every object in the world offers this sort of feedback. It’s so taken for granted that we’re usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.

There’s a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called “work” for millions of years.

Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit. What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

And that was written in 2011. We’ve not got any further.

The YouTube video he links to isn’t there anymore, but this one from Microsoft works just as well.

Microsoft: Productivity Future Vision – YouTube

Technology can’t stand still (unfortunately)

Using Proterozoic geology as his unusual starting point, MIT Media Lab Director Joi Ito takes a look at the past, present and future of the web and cultural technology.

The next Great (Digital) Extinction
As our modern dinosaurs crash down around us, I sometimes wonder what kind of humans will eventually walk out of this epic transformation. Trump and the populism that’s rampaging around the world today, marked by xenophobia, racism, sexism, and rising inequality, is greatly amplified by the forces the GDE has unleashed. For someone like me who saw the power of connection build a vibrant, technologically meshed ecosystem distinguished by peace, love, and understanding, the polarization and hatred empowered by the internet today is like watching your baby turning into the little girl in The Exorcist.

And here’s a look into the technological future with analyst Benedict Evans.

The end of the beginning
The internet began as an open, ‘permissionless’, decentralized network, but then we got (and indeed needed) new centralised networks on top, and so we’ve spent a lot of the past decade talking about search and social. Machine learning and crypto give new and often decentralized, permissionless fundamental layers for looking at meaning, intent and preference, and for attaching value to those.

The End of the Beginning
What’s the state of not just “the world of tech”, but tech in the world? The access story is now coming to an end, observes Evans, but the use story is just beginning: Most of the people are now online, but most of the money is still not. If we think we’re in a period of disruption right now, how will the next big platform shifts — like machine learning — impact huge swathes of retail, manufacturing, marketing, fintech, healthcare, entertainment, and more?

Art and AI #3

A very interesting follow-up to that story about the first artwork by an AI to be auctioned. It seems the humans behind the AI, Hugo Caselles-Dupré and the Obvious team, have had to face some considerable criticism.

The AI art at Christie’s is not what you think
Hugo Caselles-Dupré, the technical lead at Obvious, shared with me: “I’ve got to be honest with you, we have totally lost control of how the press talks about us. We are in the middle of a storm and lots of false information is released with our name on it. In fact, we are really depressed about it, because we saw that the whole community of AI art now hates us because of that. At the beginning, we just wanted to create this fun project because we love machine learning.” […]

Early on Obvious made the claim that “creativity isn’t only for humans,” implying that the machine is autonomously creating their artwork. While many articles have run with this storyline, one even crediting robots, it is not what most AI artists and AI experts in general believe to be true. Most would say that AI is augmenting artists at the moment and the description in the news is greatly exaggerated. […]

In fact, when pressed, Hugo admitted to me in our interview that this was just “clumsy communication” they made in the beginning when they didn’t think anyone was actually paying attention. […]

As we saw with Salvator Mundi last year and with the Banksy last week, the most prestigious auction houses, like museums, have the ability to elevate art and increase its value by putting it into the spotlight, shaping not only the narrative of the work, but also the narrative of art history.

IRC is 30 years old

I was never really nerdy enough to properly join in with this at the time, but it’s an interesting stroll down memory lane nevertheless.

On its 30th anniversary, IRC evokes memories of the internet’s early days
I used IRC in the early 1990s, when there were all kinds of fun things to do. There was a server with a bot that played Boggle. I was the know-it-all music snob who got kicked out of a chat channel someone set up at Woodstock ’94. I created keyboard macros that spewed out ASCII art. I skipped Mike Tyson’s pay-per-view boxing match in 2006 to watch someone describe it on IRC.

<jon12345> lewis connects again
<jon12345> arg
<jon12345> on the ropes
<CaZtRo> HES GOIN DOWN
<CaZtRo> tyson is DOWN
<DaNNe_> no!
<CaZtRo> DOWN DOWN DOWN
<DaNNe_> why ..

Internet Relay Chat turns 30—and we remember how it changed our lives
There was a moment of silence, and then something odd happened. The channel went blank. The list of users disappeared, and NetCruiser politely played the Windows alert chime through the speakers. At the bottom of the IRC window, a new message now stood alone:

“You have been kicked from channel #descent for the following reason: fuck off newbie”

I guess the Internet of 1995 wasn’t that different from the Internet of 2018.

Straightforward data science intro

This looks to be an interesting response to the call to be more data literate. Via Flowing Data, a straightforward and potentially free way to get skilled up with R, without needing to install any software, it seems.

Chromebook Data Science – a free online data science program for anyone with a web browser – Simply Statistics
The reason they are called Chromebook Data Science is because philosophically our goal was that anyone with a Chromebook could do the courses. All you need is a web browser and an internet connection. The courses all take advantage of RStudio Cloud so that all course work can be completed entirely in a web browser. No need to install software or have the latest MacBook Computer.

Here’s some info on what the courses cover, including introductions to R and GitHub. Worth a look?

So, farewell then, GeoCities. Again

Ten years after it shut down for the rest of us, Yahoo Japan has finally pulled the plug on its GeoCities service.

Yahoo Japan is shutting down its website hosting service GeoCities
The company said in a statement that it was hard to encapsulate in one word the reason for the shut down, but that profitability and technological issues were primary factors. It added that it was full of “regret” for the fate of the immense amount of information that would be lost as a result of the service’s closure. […]

The fact that GeoCities survived in Japan for so long speaks to the country’s idiosyncratic nature online. Despite the fact that Yahoo—which purchased GeoCities in 1999 for almost $4 billion at the peak of the dot.com boom—has fallen into irrelevance in much of the world, the company continues to be the dominant news portal in Japan. It still commands a sizeable market share in search, though it has steadily ceded its position to Google over the years.

So it goes.