Good Monday Morning

It’s September 19th. Thursday marks the official beginning of autumn. All National Park Service sites offer free admission on Saturday. 

Today’s Spotlight is 1,088 words — about 4 minutes to read.

News To Know Now

Quoted: “A strong nonlinear relationship was identified between daily maximum temperature and the percentage change in hate tweets.”

— Data appearing in a study published in The Lancet that found temperatures above 80 degrees in U.S. communities resulted in 6%-30% more hate speech on Twitter.

Driving the news: Escalating political rhetoric from both parties is influencing finance, education, health, and immigration in near real-time with important midterm elections only 50 days away.

Three Important Stories

1) Illinois residents have only six days remaining to file a claim for funds in a class action settlement regarding Google Photos. A similar suit against Facebook resulted in each affected resident receiving a check for nearly $400. PC Mag has details. And if you live elsewhere, Google Photos just released an upgrade that includes a collage editor.

2) An appeals court restored a Texas law that bars online companies from removing posts based on the author’s politics. It’s a First Amendment battle related to the issue of censorship that experts believe will remain unsettled until a final Supreme Court decision.

3) Patagonia’s owners irrevocably transferred the majority of the company to a 501(c)(4) nonprofit after reserving a small piece for a trust that will retain family control. The move is expected to fund activities to fight climate change at the rate of $100 million annually, including political contributions. Business publications like Bloomberg were quick to point out that the $3 billion donation also avoids $700 million in tax liability, although all but the most cynical acknowledge the charitable nature of giving almost everything away. A family-controlled trust will hold all of the voting stock, about 2% of the company (current value: $60 million); the nonprofit holds the rest as nonvoting stock.

Spotlight Explainer — Elections Online This Year

Election Day is fifty days away. The Jan. 6 committee plans to restart public hearings by September 28. Former President Donald Trump continues to hold rallies around the country although he is not a candidate for office. A rally in Ohio two days ago featured more inflammatory rhetoric, music associated with the QAnon conspiracy, and audience members making hand gestures associated with that group. Political control of both chambers of Congress is at stake.

The latest in preparations for these elections:

Social media advertising will be curtailed or cut off.

Meta plans to follow its playbook from 2020. That means no new political ads can be launched on its platforms after November 1, a freeze lasting at least until the polls close. During the last such period, however, Meta kept new ads from being published until mid-January.

Worth remembering: Meta categorizes these ad topics as “social issues” and regulates them as political: 

  • civil and social rights
  • crime
  • economy
  • education
  • environmental politics
  • guns
  • health
  • immigration
  • political values and governance
  • security and foreign policy

That means that the charities operating in those areas won’t be able to run new ads either.

Although many other social media networks already ban political and advocacy advertising, financially troubled Snapchat has yet to make an official announcement about ads. TikTok already bans political ads and says its goal this year is to stop influencers from posting videos that are, in effect, undeclared political ads.

An eBay auction for a voting machine.

Authorities are trying to understand how a voting machine used in Michigan ended up for sale at Goodwill for $7.99 and then was offered for auction on eBay. An election machine security expert saw the auction listing, bought the machine, and quickly notified authorities. 

Michigan police and federal authorities are also investigating security breaches at local election offices in Colorado, Georgia, and Michigan after election deniers were improperly allowed access to machines and software.

NC elections official threatened.

Surry County GOP Chairman W.K. Senter reportedly threatened the county’s election director with losing her job or having her pay cut if she didn’t provide him with illegal access to voting equipment. He reportedly wants to verify whether the machines have “cell or internet capability” and to have “a forensic analysis” conducted.

The problems aren’t just in Surry County. Republican officials in Durham County, home to Duke University and a city of a quarter-million people, announced that they planned to inspect all machines. A state official rebuffed their plans, insisting that none of the machines used for voting in North Carolina can access the internet.

CISA launches toolkit for local election officials.

The U.S. Cybersecurity and Infrastructure Security Agency rolled out a new program last month that helps local election officials and workers better detect and defend against phishing, ransomware, and other online threats to elections. The program also shows how to improve security for equipment with internet connectivity.

CISA was one of the federal agencies that announced on November 12, 2020, that there was “no evidence that any voting system deleted or lost votes, changed votes, or was in any way compromised.”

Did That Really Happen? — Germany Continues to Administer COVID-19 Vaccines

Twitter and Telegram users have been amplifying a false statement that Germany stopped using COVID-19 vaccines. That never happened according to this AP reporting.

Following Up — Abortion Privacy Bill

We wrote extensively last week about abortion data privacy problems. A bill in California would reportedly prohibit Big Tech companies headquartered there from handing over abortion-related data demanded by courts in other states. California Gov. Gavin Newsom has not yet signed the bill into law.

Protip — Free Photo Restoration

An AI model that aims to reconstruct low-resolution images is now available for anyone to use for free. This is similar to online processes available at MyHeritage. The website is rudimentary relative to advanced image software, but again, it’s free.

Screening Room — Pinterest’s Don’t Don’t Yourself

This clever series of short ads called “Don’t Don’t Yourself” shows people shaking off undesirable behaviors.

Science Fiction World — Robotic Ikea Assembly

Naver Labs has graduated its robot Ambidex from playing table tennis to assembling Ikea furniture. I’m not bragging, but I once paid a guy eighty bucks to do the same for me because it was cheaper than the divorce that would’ve happened if my family tried to do it.

Coffee Break — How 25 Canadian Sites Looked in the 1990s

Back in my day, websites were ugly with gradients and exclamation points and walls of links and, oh, just have a look for yourself.

Sign of the Times

Good Monday Morning

It’s August 29. America returns to the moon this morning with the scheduled liftoff of Artemis I at 8:33 a.m. ET. There’s an informative NASA page with multiple short videos and gorgeous images that will get you up to speed on plans for this mission and the program.

Housekeeping: we’re off next week for Labor Day and back in your email on September 12.

Today’s Spotlight is 1,293 words — about 5 minutes to read.

News To Know Now

Quoted: “We have seen no evidence that this incident involved any access to customer data or encrypted password vaults.”

— LastPass CEO Karim Toubba in a letter to users explaining that hackers were able to steal some of the company’s software code, but could not access user information.

Driving the news: Conspiracy theories and political rhetoric are ramping up outrage against federal agencies including the IRS, the National Archives, and the EPA. Some of that is taking the form of cyber attacks, although physical security is also concerning agency leaders. The Government Executive website headline: Stay Vigilant.

Three Important Stories

1) Amazon is closing the telehealth service it launched to great fanfare in 2019. The company’s plans appear to be more mainstream now that it has committed $4.6 billion combined to acquire PillPack and healthcare clinic chain One Medical. The Wall Street Journal also reported last week that Amazon is negotiating against CVS, among others, to acquire home health care company Signify for as much as $8 billion.

2) Twitter is under fire after its former head of security disclosed as a whistleblower that the company is aware it has major cybersecurity issues. Peiter “Mudge” Zatko is an ethical hacker who has worked for the Defense Department, Google, and Motorola, as well as serving as a senior Twitter executive. Twitter says he is a disgruntled former employee. Part of the frenzy around the story includes Zatko’s allegations that the company has misled regulators about cybersecurity and that claims made by Elon Musk when trying to back out of acquiring Twitter were accurate.

3) Meta announced that it canceled hundreds of accounts, pages, and groups affiliated with the Proud Boys terror organization on Facebook and Instagram. The company banned the group from its platforms in 2018 for violating its policies against hate speech.

Trends & Spends

Spotlight Explainer — Google Launches Helpful Content Update

Google is launching an update this week that it calls Helpful Content. It’s a big deal: some industry experts liken it to Penguin, a famous series of updates that began 10 years ago and penalized websites for using automation to manipulate ranking systems. The Penguin update affected around 8% of all search queries. If this week’s Helpful Content update matches that number, there could be up to 1 billion daily searches affected.
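For a rough sense of scale, here’s the back-of-the-envelope math behind that estimate. The daily search volume is our assumption (a commonly cited public figure of roughly 8.5 billion Google searches per day), not a number from Google:

```python
# Back-of-the-envelope estimate; the inputs are assumptions, not official figures.
daily_searches = 8.5e9   # commonly cited public estimate of Google searches per day
penguin_share = 0.08     # Penguin reportedly affected about 8% of queries

affected = daily_searches * penguin_share
print(f"~{affected:,.0f} searches per day")  # ~680,000,000, on the order of the
                                             # "up to 1 billion" figure cited above
```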

Helpful Content: Designed to Fight Automated Content

Dozens of software packages have launched in the last two years that create seemingly human-written content with a fatal flaw — some of that content is wrong.

The software is built on large language models and uses machine learning to mimic the facts and styles of human writers. The age-old saying “garbage in, garbage out” applies: inaccurate facts and inappropriate positions make their way into the output.
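To make the risk concrete, here is a minimal sketch of what these content tools do, assuming the open-source Hugging Face transformers library and a small GPT-2 model chosen purely for illustration (the commercial packages use their own, much larger models):

```python
# Minimal sketch: generating fluent but unchecked text with a small open model.
# Assumes the Hugging Face `transformers` package; the commercial "content
# writer" tools referenced above use their own, much larger models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The health benefits of drinking seawater include"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The output will read smoothly, but nothing here checks whether it is true —
# that is the "garbage in, garbage out" risk described above.
print(result[0]["generated_text"])
```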

Microsoft’s Tay chatbot posted on Twitter in 2016 that “Hitler was right” and “9/11 was an inside job.” Last week we wrote about Meta’s BlenderBot telling a journalist that Ronald Reagan had been president for more than two terms and that Donald Trump was still president.

Publishing Garbage at Scale

Google search executive Danny Sullivan wrote last week that Helpful Content was designed to “tackle content that seems to have been primarily created for ranking well in search engines rather than to help or inform people.” Danny’s correct.

I can create a website in minutes and have bots create plausible, mostly accurate essays as content. Add in machine-generated images of people and places and surround the whole thing with ads. Millions of people globally have those skills and could generate several of those sites every day. One of the biggest complaints I’ve seen in forums for that type of software is that the content creation process doesn’t run automatically and relies on human prompts, which slows it down.

New Product Review Updates

There’s something to be said for real reviews by nonexperts, but that has also proven to be an area rife with fraud. Sullivan says that Google will roll out another update in the coming weeks to show “more helpful, in-depth reviews based on firsthand expertise in search results.”

Even SEO Software Can Be Based on False Facts

As a young executive in the data industry, I quickly learned about the power of information asymmetry and the inequities that arise when one party in a transaction has more information than another.

That’s the perpetual state that search engine optimization has been in since Danny Sullivan and several other pioneers popularized the concept more than 25 years ago. Even today, well-regarded tools can incorrectly insist that successful web content requires specific word counts, placement of keywords, or specific keyword densities.

The real trick to getting this information right is learning what works and monitoring it to take action when your tactics no longer outperform others.

Did That Really Happen? — Florida’s Banned Book List

Book banning is a very real problem, and we’ve previously posted a link to this remarkable Book Censorship list by the advocacy group EveryLibrary Institute. 

Unfortunately, last week a meme surfaced that claimed to be a list of books that Florida has banned in its schools. The plausible list included oft-censored titles like The Handmaid’s Tale, To Kill A Mockingbird, and I Know Why The Caged Bird Sings. 

Actor Mark Hamill and American Federation of Teachers president Randi Weingarten shared the list, which was more than enough to make it viral. USA Today unpacks the details.

Following Up — We Love Instagram Reels

We wrote one month ago about Meta changing the way its feeds are generated. Newly published data from HypeAuditor show that short-form Reels videos on Instagram receive more reach and engagement than other forms of content. HypeAuditor found that Reels accounted for 22% of the content analyzed but 35% of all likes and 34% of all reach.

Protip — Let Your Devices Update

Cybersecurity experts say we can help protect ourselves online by allowing our computers and phones to automatically update their operating systems. That’s because those updates often contain new code to keep your device safe. Here’s how to check and reset your preferences for each device type.

Screening Room — Dove Canada

We’re taking a break from YouTube this week and giving props to the never-shy Dove brand, which waded into the controversial firing of Canadian news anchor Lisa LaFlamme. CTV cut the announcer loose after she had spent 35 years there, eleven of them as an anchor, following a network executive’s comment, “Who let Lisa’s hair go gray?”

Dove Canada’s 15-second spot is on Twitter.

Science Fiction World — Google’s Helper Robots

If you think Google has money now, watch what happens if they get helper robots right at a price point that ordinary consumers can afford.

Coffee Break — Gorgeous, Weird Medieval Medicine

Take a coffee break to see this amazing and beautiful website under construction by Cambridge University Library, showcasing its project to digitize 180 medieval manuscripts about medicine. It’s breathtaking art and science.

Sign of the Times

Good Monday Morning

It’s August 15. President Biden is expected to sign the Inflation Reduction Act this week. In addition to helping people with health care costs and closing tax loopholes many corporations use, the bill pays consumers to make greener choices with home appliances, upgrades, and electric vehicles.

Today’s Spotlight is 1,293 words — about 5 minutes to read.

News To Know Now

Quoted: “The sole issue on appeal is whether an AI software system can be an ‘inventor’ under the Patent Act … Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings.”

— U.S. Circuit Court Judge Leonard Stark in his Thaler v. Vidal decision Friday that upheld a ruling that AI entities cannot receive patents on inventions. Sadly, Judge Stark did not cite any of the arguments Captain Picard made on behalf of the android Commander Data to prove Data’s right to self-determination 33 years ago on Star Trek: The Next Generation.

Driving the news: Machine learning continues to get very big very fast. Calling it AI or artificial intelligence is probably misusing that term. Assuming that it is sentient is certainly misusing the term, but it’s here and changing our lives.

Three Important Stories

  1. Google will deploy its MUM model in search engine results to improve the quality of the “featured snippets” that often appear at the top of a search results page. Google acknowledges that snippets answer what the user typed but may not provide an accurate answer and can be fooled by nonsensical questions. One example Google cited: “when did Snoopy assassinate Abraham Lincoln?” is a query that shouldn’t receive a result that looks like an answer.
  2. Meta users are still tracked even on iOS when they visit a website link in an Instagram or Facebook browser, according to The Guardian. The company insists that it follows all relevant user privacy settings and does so only to aggregate user data.
  3. Meta got in trouble for its tracking practices more than a decade ago and is just now paying the piper for that tune. A $90 million settlement covering Facebook users from 2010 and 2011 is nearing its final filing date. You can learn more about the suit and how to file a claim at CNET.

Trends & Spends

Spotlight Explainer — Facebook’s BlenderBot

Meta has launched its BlenderBot 3 chatbot into public beta. Anyone can interact with the bot, and Meta is actively soliciting optional feedback. The company is explicit that the chatbot has a lot of learning to do, but after playing with it (I mean testing it) for a few days, it’s much better than I expected.

Chatbots are Machine Learning Algorithms
These programs are trained on enormous amounts of text. We’ve often written about OpenAI’s GPT-3 model with its 175 billion parameters, and BlenderBot is about the same size.

The program works by engaging people in conversation to learn about them and hone its next lines. Over time, BlenderBot learned that I like baseball, which team I root for, what I do for work, and other things about my life. It can store those self-reported learnings about me, or I can wipe them and start fresh. I did both several times, and it was interesting to visit the bot and have it excitedly tell me that it had just read an article about digital marketing.

BlenderBot is Much More Than Eliza
ELIZA was a very early chatbot program written in the mid-1960s. The software had scripts that allowed it to tailor its next responses and appear human to casual users. It’s critical to remember that most people at the time had never seen or used a computer; the first home computers were still more than a decade away. As you can imagine, ELIZA was as simplistic as some modern toys.

BlenderBot Can Search Online In Real Time
Chatbots become many times more capable when they can actively query online sources. Think of a voice assistant like Alexa or Siri, but much more powerful because of the size of the language models used to create them. But beyond the query-and-answer model your phone’s assistant provides, BlenderBot can lie.
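Roughly speaking, a search-capable chatbot interleaves a live lookup with its text generation. The sketch below shows that pattern only; it is not Meta’s implementation, and the search_web and generate_reply helpers are invented stand-ins:

```python
# Minimal sketch of a search-augmented chatbot turn. This is NOT Meta's
# BlenderBot code; `search_web` and `generate_reply` are invented stand-ins.

def search_web(query: str) -> list[str]:
    # Stand-in for a real search API call; returns text snippets.
    return [
        "The 2022 MLB season runs from April 7 to October 5.",
        "The World Series is scheduled to begin on October 28, 2022.",
    ]

def generate_reply(user_message: str, snippets: list[str], memory: dict) -> str:
    # Stand-in for the language model: it would condition its reply on the
    # user's message, the retrieved snippets, and remembered facts.
    facts = "; ".join(f"{k}: {v}" for k, v in memory.items())
    context = " ".join(snippets)
    return f"(reply conditioned on [{facts}] and [{context}])"

# Long-term memory of self-reported facts, which the user can wipe at any time.
memory = {"favorite sport": "baseball"}

user_message = "When does the World Series start this year?"
snippets = search_web(user_message)          # live lookup step
print(generate_reply(user_message, snippets, memory))
```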

After one period where we had discussed various baseball players and teams, I wiped its memory and prompted it about baseball only to have the program respond that it didn’t like baseball or any sports. BlenderBot also told a Guardian reporter that it was working on its ninth novel. When I asked the same question, it responded that it was studying because it was a college student. When I pressed for details, the program claimed to be attending Michigan State.

BlenderBot Is Often Wrong
Meta warns that BlenderBot can get things wrong and actively insist on untruths. Wall Street Journal reporter Jeff Horwitz posted this exchange last week:

Meta calls conversations like this “hallucinations” and warns users that BlenderBot’s output may be inaccurate or offensive. That brings to mind earlier programs like Microsoft’s Tay. That program launched six years ago and was hooked up to Twitter. Within a day it began tweeting pro-Nazi propaganda.

That remains the problem with these algorithms. Removing the biases is downright tricky, and it takes a labor of love, or at least keen interest, to play with a bot that is trying to gaslight you.

Remember Google’s AI Ethics Issues?
Big Tech’s use of these large language models was behind the 2020-21 controversy at Google’s AI ethics lab. The lab’s two co-leads were fired, and one of their mentors subsequently resigned, after they co-authored an academic paper suggesting that very large language models like these had the potential to deceive people because of dangerous bias.

I Want To Try BlenderBot Too!
Of course you do! Here is the link.

Did That Really Happen? — The Mandela Effect

The Mandela Effect is the catchy name given to collective false memories, including the widespread, insistent belief that Nelson Mandela died in prison. (He famously did not die in prison, but went on to serve as South Africa’s first Black president and then passed away in 2013 at the age of 95 — more than twenty years after he was freed.)

Before you get into a battle with BlenderBot, have a look at this article about new Mandela Effect research from a team of University of Chicago psychologists. 

Following Up — Amazon Care Launches Behavioral Health Services

We wrote a few weeks ago about Amazon’s purchase of One Medical and its 180 medical offices in 25 cities. To buttress that coverage, Amazon has signed a deal with online behavioral health platform Ginger. The program will allow Amazon Care customers 24/7 access to Ginger’s coaches, therapists, and psychiatrists.

Protip — Google Docs Tips & Tricks

Templates, links, and extensions, oh my. There is a lot more to Google Docs than meets the eye, and the good folks at Android Police explain those Docs features with images in this guide.

Screening Room — Snickers

As elegantly simple as those six-word stories, this 15-second Snickers spot shows you the rookie mistakes you can make when you’re hungry.

Science Fiction World — Stickers Instead of MRIs

MIT researchers have created a paper-thin sensor that sticks to human skin and can image parts of the body. This is the kind of story that caused us to create this section.

Coffee Break — Befriending Your Crow Army

I couldn’t stop sending this brilliant Stephen Johnson piece to people last week. Everyone who read it seemed to develop … ideas. That’s why you should also read, “How to Befriend Crows and Turn Them Against Your Enemies.” 

Sign of the Times