Good Monday Morning


It’s August 28th. Spotlight is off next week for Labor Day. Enjoy your long weekend plans.

Today’s Spotlight is 1,254 words — about 4 1/2 minutes to read.

3 Stories to Know

1. Test Scores Shared: Gizmodo reports that the College Board testing service shares SAT scores and GPAs with Facebook and TikTok via advertising tracking pixels. The College Board later confirmed the practice, which is common in digital advertising, but denied sharing personally identifiable information.

2. Hackers Selling Info: 404 Media reports that hackers are using Telegram to sell credit header files, personal information sourced from credit bureaus, for $15 per person. The data is advertised in criminal chat rooms for illicit activities including swatting and violence.

3. EU Targets Tech: The EU’s Digital Services Act (DSA) imposes new regulations on the 19 biggest tech companies with over 45 million monthly users in the EU. The DSA mandates the removal of posts containing illegal goods and bans targeted advertising based on sexual orientation, religion, ethnicity, or political beliefs. Noncompliance risks fines of up to 6% of global revenue.

Clarifying facial recognition: Last week, we reported on six nonwhite people wrongfully arrested solely due to facial recognition. At least two were jailed for up to one week. While the technology can and should initiate investigations, it shouldn’t be the sole basis for arrest as it was in those cases.

Spotlight on Politics Online: What’s Changing

Legal, technological, and social shifts have significantly changed the online political landscape since 2020.

1. Tech Trying to Protect Against Disinformation

Recent lawsuits have put tech giants Google and YouTube, both subsidiaries of Alphabet Inc., under scrutiny. Presidential candidate Robert F. Kennedy, Jr., filed a lawsuit against YouTube, accusing the platform of censoring his content that questions the safety of vaccines. Research has identified Kennedy as one of the nation’s top sources of vaccine disinformation.

Meanwhile, the Republican National Committee (RNC) sued Google, alleging that the search engine’s email system was suppressing conservative voices. A judge recently dismissed the RNC’s claims, stating that there was insufficient evidence to support the allegations of bias. 

2. AI and Political Bias

Allegations of political bias in AI technologies like ChatGPT have sparked considerable debate. A recent study by Mandiant, a U.S. cyber firm owned by Google, reveals that AI is increasingly being used in online influence campaigns.

The study found that while AI can amplify messages and target audiences more effectively, its impact on changing public opinion is limited. These findings raise reasonable questions about the technology’s impartiality and ethical use. The involvement of Alphabet-owned entities in platform control, legal defenses, and research studies underscores the expansive influence of major tech players in shaping this discourse.

3. The Importance of Academia

The Australian Christian Lobby (ACL) has been vocal in opposing a bill toughening social media speech requirements in that country by claiming it poses a threat to religious freedom. However, that organization has been implicated in a misinformation campaign targeting the Labor Party by posting false narratives to influence public opinion. This incident underscores the global challenges of combating digital misinformation and the need for effective countermeasures.

Meanwhile, in the U.S., Joan Donovan, a leading expert on media manipulation and disinformation, was recently forced to leave her role at Harvard’s Shorenstein Center. Her departure came after administrative decisions ended her Technology and Social Change project. Donovan’s work has been a cornerstone in the study of online misinformation and has influenced both policy and platform moderation.

Her forced exit from Harvard raises questions about the future of academic research in this critical area, emphasizing the need for scholarly engagement to combat misinformation effectively. 

4. Meta Wants Fewer Political Posts

Meta’s Threads platform is taking steps to foster a friendlier online environment. By downgrading news and politics in its feed, the platform aims to create a space where users can connect and engage without the constant influx of divisive content.

This aligns with recent Pew Research findings that reveal Americans’ differing views on the impact of social media on U.S. democracy. The research underscores our deep ideological divide and highlights the evolving landscape of online discourse. There will be continuing changes as next year’s presidential election cycle gathers steam, and we’ll share that news as it happens.

Practical AI

Quotable: “AI builders are using Hugging Face all day, every day … Maybe in five years, you’re going to have like 100 million AI builders. And if all of them use Hugging Face all day, every day, we’ll obviously be in a good position.”

Clement Delangue, Hugging Face CEO, whose company raised $235 million last week at a $4.5 billion valuation.

AI Books Flood Amazon: People are posting books for sale on Amazon that have been authored by generative AI. Biggest issues: they’re often inaccurate, cannibalize sales of human-written works, and can even be falsely attributed to well-known authors.

Tool of the Week: Hugging Face’s AutoTrain helps you train an AI model to learn a task. If you’re dabbling in machine learning, this no-code tool is a fine starting point.

Did That Really Happen — Dodger Stadium & Ted Cruz’s Shark

Flooding in Southern California led to many inaccurate claims. One viral photograph showed what appeared to be a flooded Dodger Stadium. A spokesperson said that the photo was an unfortunate optical illusion and that some areas of the stadium had pooled water of “maybe one inch.”

Another viral photo, an 18-year-old image of a shark superimposed on a highway, was retweeted by Sen. Ted Cruz (R-TX). Despite being told the photo was a hoax, Cruz refused to delete it and wrote, “In LA, you never know …” before expressing a hope that people stayed safe.

Following Up — Revenge (Fake) Porn 

We wrote last week about a Houston jury awarding more than $1 billion in damages to a woman whose explicit photos were released without her permission.

Now there are details about a program that lets non-technical users easily substitute faces into extreme porn images, using photos scraped from online sources without consent. Access to the very NSFW site costs only $4 per month, and the site’s owners claim a half-million users. (404 Media article – extreme language)

Protip — Gmail Templates

Gmail templates are one of my favorite time-savers. This ZDNet feature shows you how to set up your own.

Screening Room — Apple Helping Fit Animal Prosthetics 

Science Fiction World — Our Mars Lander Filmed Our Mars Helicopter

That would be Perseverance filming Ingenuity’s 54th flight on Mars for National Aviation Day. You can see the flight too–it lasts under one minute, which doesn’t sound impressive until you realize IT’S ON ANOTHER PLANET.

Coffee Break — The Never-Ending Password Change

You won’t get as far as you think you might in Neal Agarwal’s latest interactive, the Password Game. There are allegedly 35 steps. One day I hope to make it past the teens.

Sign of the Times

Good Monday Morning

It’s August 21st. Friday is the deadline to add your name to the Facebook privacy class action settlement. You qualify if you were a U.S. Facebook user between 2007 and 2022. Official website.

Today’s Spotlight is 1,175 words — about 4 1/2 minutes to read.

3 Stories to Know

1. Revenge Porn Case: A Houston jury awarded over $1 billion in a revenge porn case, possibly setting a legal precedent. The decision underscores the issue’s potential damage. A 2017 study by Data & Society Research Institute found that 1 in 25 Americans has experienced nonconsensual image sharing.

2. Musk Throttles Web: Twitter, now known as X, is slowing traffic to sites including the New York Times, Facebook, and Instagram, forcing users on X to wait an additional five seconds after clicking a link. The action targets companies that have previously drawn owner Elon Musk’s ire. Some throttling has stopped, but there are lingering concerns about Musk’s influence over user access to information.

3. Time & Weather: Google’s Contacts app now displays weather and time info for your contacts’ locations. The new feature will help communications across time zones and can be a good ice breaker too.

Spotlight on What to Know About AI

Recap of Part 1

Last week, we unraveled the basics of Artificial Intelligence (AI), the groundbreaking technology reshaping our lives. From voice assistants to personalized recommendations, AI is becoming an integral part of everyday experiences. But there’s more to the story!

Mystery Behind AI Outputs

AI might seem magical, but behind the scenes, it’s a complex data science.

Think of AI as a black box where data goes in and intelligent decisions come out. What happens inside? Algorithms like decision trees sort data into categories, while neural networks, akin to a web of interconnected brain cells, process information through layers, refining it into smart actions. Human experts often check these processes to ensure fairness and accuracy.

Then there’s generative AI like ChatGPT or Google Bard, which functions like a sophisticated, advanced autocomplete system. Curious about how machines learn? Here’s an explanation made simple.
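The black-box idea above can be made concrete with a toy sketch. This hand-written function mimics the structure a learned decision tree ends up with, a cascade of yes/no questions that sorts data into categories. The thresholds and labels here are invented for illustration, not taken from any real model.

```python
# Toy "black box": data goes in, a decision comes out.
# A real decision tree learns these questions from labeled examples;
# this hand-written version just shows the shape of the result.

def classify_weather(humidity: float, cloud_cover: float) -> str:
    """Sort a reading into a category, the way a decision tree would."""
    if humidity > 0.8:
        if cloud_cover > 0.5:
            return "rain"
        return "fog"
    if cloud_cover > 0.7:
        return "overcast"
    return "clear"

print(classify_weather(humidity=0.9, cloud_cover=0.6))  # rain
print(classify_weather(humidity=0.3, cloud_cover=0.2))  # clear
```

The difference with actual machine learning is simply who writes the questions: here a human did, while a trained tree derives them automatically from the data, which is why human experts still check the results for fairness and accuracy.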

Ethics & Controversy

Ethical controversies surround AI development and use. The litany includes bias, accountability, and ownership of the output.

Popular video conferencing platform Zoom faced significant backlash this month when it revealed plans to train AI algorithms using customer calls. The company publicly canceled those plans days later amid horrific brand damage. The incident has led to a larger conversation about consent and transparency in AI development.

Taking a bold step, the New York Times blocked AI training on its content, signifying a turning point in how organizations address AI interactions. A week later, Microsoft joined them, a remarkable move for a company that has invested $10 billion in OpenAI and ChatGPT. Their moves reflect growing concerns about how AI algorithms might misinterpret or misuse journalistic or technical content, and it has spurred other media outlets to evaluate their own policies.

Fair compensation for source material, one of the internet’s biggest bugaboos, is at the heart of many disputes.  

Google’s approach to AI has led to several debates about ethics and fair practice. One significant controversy is the company’s use of online content for AI training, sparking concerns about copyrights. Because Google leverages publicly available information without explicit permission, questions arise about intellectual property rights and fair use.

The 2020 termination of renowned AI ethicist Timnit Gebru from Google ignited a firestorm of criticism. Gebru was a prominent advocate for diversity in technology and raised critical questions about bias in AI, specifically the large language models now at the heart of Google Bard.

Her dismissal exposed underlying tensions within the tech community about freedom of speech, research integrity, and the responsible development of AI. Together, these controversies represent the complex intersection between technology, law, and ethics, with potential wide-reaching ramifications.

Today

AI is no longer a futuristic concept — it’s here today, impacting how we live, work, and interact. The journey into AI’s world uncovers innovations, challenges, and ethical dilemmas. As AI continues to evolve, so does our understanding of this fascinating technology. 

Our Practical AI section below covers each week’s highlights and news in this explosive new field.

Practical AI

Quotable: “Right now, with 1,000 hours of therapy time, we can treat somewhere between 80 and 90 clients. Can you treat 200, 300, even 400 clients with the same amount of therapy hours?”

— Stephen Freer, Chief Clinical Officer of Ieso, which oversees 650 therapists who may use AI to help with case documentation.

Google, Universal Negotiate: Google is in talks with Universal Music Group to negotiate a licensing agreement for using Universal’s music and videos to train Google’s AI models. The negotiations mark a new approach in machine learning, using media to enhance understanding of music and visual content. This partnership could set a precedent for collaboration between tech and entertainment industries in AI development. Artists, especially striking Hollywood writers and actors, are keenly aware of this issue.

Tool of the Week: This free infographic is one of the best I’ve seen to help guide people on using ChatGPT, Bard, or other AI chatbots.

Did That Really Happen — Maui Misinformation

Dangerous misinformation circulated falsely claiming that Maui residents accepting FEMA assistance could lose their homes or property to the federal government, a claim The Associated Press debunked.

Conspiracy theorists also falsely claimed that former President Barack Obama’s home was untouched by fires in Hawaii, stirring up conservative outrage, until others pointed out that Obama’s Hawaii home is on a different island.

Following Up — Another Abuse of Facial Recognition

We’ve told you repeatedly about law enforcement agencies misusing facial recognition. There’s news about Porcha Woodruff, a pregnant woman from Detroit, who was wrongfully arrested for robbery and carjacking after an automated facial recognition search. Despite being visibly pregnant, she was handcuffed, held for 11 hours, and had her iPhone seized as evidence. Woodruff’s is the sixth instance where a Black person has been falsely accused of a crime by police misusing facial recognition.  

Protip — Reverse Image Search

This step-by-step guide walks you through performing a reverse image search, helping you find the original source of an image, debunk fake photos, and identify objects, people, or locations using Google, Bing, or TinEye.

Screening Room — Country Crock’s Legendary Campaigns Loves Moms

Science Fiction World — New Ocean Floor Ecosystem 

Scientists using robots have uncovered an ecosystem thriving beneath the ocean floor. This previously unseen world, located in Earth’s crust, hosts diverse microbes that play a crucial role in the planet’s cycle.

Coffee Break — Steve Ballmer’s Hysterical 1986 Ad Parody 

Steve Ballmer, the world’s 10th richest person, was about 30 years old and rallying the Microsoft troops around their new operating system called Windows when he made this amazing commercial parody.

Sign of the Times

Good Monday Morning


It’s August 14th. Here are agencies accepting donations to help Maui recover.

Today’s Spotlight is 931 words — about 3 1/2 minutes to read.

3 Stories to Know

1. Meta’s Legal Challenge: Meta wants a new hearing following a ruling that its ad-targeting system may have facilitated discrimination. The ruling allowed plaintiffs, including civil rights groups, to pursue claims of race, gender, and other bias in advertising.

2. Acoustic Attack on Keystrokes: A newly discovered acoustic attack can steal data by detecting keystrokes with an alarming 95% accuracy. Using built-in microphones on devices, the method listens to the sounds of typing, translating them into the actual keys pressed. Heads up: this is elite-level hacking, not someone looking for your Facebook password.

3. TikTok’s U.S. Retail Push: TikTok is venturing into e-commerce in the U.S. by selling made-in-China goods, aiming to leverage its massive user base. The move marks a significant shift for the platform and aligns with parent company ByteDance’s broader e-commerce ambitions.

Spotlight on What to Know About AI

This week, Spotlight begins to demystify the intricate world of AI, unpacking the terminology, unraveling the mechanisms, and delving into the current debates.

Essential Definitions

Machine Learning (ML): A method where computers analyze data to predict outcomes, like Netflix’s recommendation system.

Large Language Model (LLM): A specialized form of ML trained on vast text data, enabling tools like chatbots to communicate naturally with people.

Generative AI: AI models creating content like the text or images in Dall-E’s or MidJourney’s art generation.
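The machine learning definition above, a computer analyzing data to predict outcomes, can be shown in miniature. This sketch fits a straight line to past data and uses it to predict an unseen value; the numbers and the recommendation framing are hypothetical, and real systems like Netflix’s are vastly more complex.

```python
# A minimal taste of machine learning: learn a pattern from past
# data, then predict an unseen value. Data here is invented.

def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours watched vs. shows clicked, say.
hours = [1, 2, 3, 4]
clicks = [2, 4, 6, 8]
slope, intercept = fit_line(hours, clicks)
print(slope * 5 + intercept)  # predicts 10.0 for 5 hours
```

Everything from spam filters to recommendation engines is, at heart, this same move: find the pattern in yesterday’s data, apply it to today’s.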

The Human Touch in AI Training

Human involvement in AI extends far beyond mere data processing. It’s not just a handful of experts either; thousands of people worldwide are engaged in the meticulous task of labeling data. This process involves identifying and categorizing elements within images, texts, and other forms of data.

For instance, labeling various plant species in photos teaches an AI to recognize similar patterns, while in other cases, humans might be transcribing and annotating spoken language. This extensive human interaction underlines the complexity of AI and is essential to its success.

The magnitude of this human involvement also brings challenges, such as ensuring quality and dealing with the sheer volume of data that requires annotation. Some projects might involve millions of individual pieces of data, each one needing precise labeling. While self-teaching algorithms exist, they concern scientists due to potential inaccuracies, emphasizing the irreplaceable role of human insight.
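One common way labeling teams tackle the quality problem described above is to have multiple annotators label the same items and compare them. This is a hedged sketch of the simplest such check, plain percent agreement; the plant labels are invented for illustration, and real projects use more sophisticated measures.

```python
# Simple quality check for human-labeled data: how often do two
# annotators agree on the same items? (Illustrative labels only.)

def percent_agreement(labels_a, labels_b):
    """Share of items where two annotators chose the same label."""
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

annotator_1 = ["rose", "tulip", "rose", "daisy", "tulip"]
annotator_2 = ["rose", "tulip", "daisy", "daisy", "tulip"]
print(percent_agreement(annotator_1, annotator_2))  # 0.8
```

Items where annotators disagree are typically flagged for review by a third labeler or an expert, which is exactly the kind of human judgment self-teaching algorithms lack.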

Machine Learning vs. True AI

ML predicts and analyzes data, forecasting weather patterns, whereas true AI (AGI) would interpret the weather’s impact on daily life. While ML is prevalent in tools like Google’s search engine, AGI is still a concept from science fiction, epitomized by HAL 9000 from “2001: A Space Odyssey” or Data from “Star Trek”. 

Coming Next Week

Next week, Spotlight takes you further into the realm of AI, exploring ethical dilemmas, corporate strategies, and recent events at Zoom, Google, and the New York Times that are impacting the industry.

Practical AI

Quotable: “These tools don’t work. They don’t do what they say they do. They’re not detectors of AI.”
— Debora Weber-Wulff, a member of a multi-university research group evaluating 14 software programs that purportedly detect AI output.

NYC’s AI Bias Law: New York City’s new law targets AI bias in hiring, requiring audits for discrimination. This pioneering legislation aims to promote transparency and fairness in employment while, again, keeping humans in the mix.

Tool of the Week: Lifewire provides a walk-through demonstrating how to use Google Bard in Google Sheets.

Did That Really Happen — Heinz Ketchup’s Sugar Content

A viral social media claim stated that Heinz Tomato Ketchup contains a staggering amount of sugar. Snopes found the claim partially true; while Heinz ketchup does contain sugar, the amount is in line with industry standards and clearly labeled on the packaging. 

Following Up — Amazon’s Packaging Revolution

We told you last week that Amazon is retrenching. Now there’s news that Amazon may ditch unnecessary outer packaging for its Prime customers, aiming to reduce waste and save on costs. There are significant savings from reduced materials and shipping weights if it works. The long-term impact on both Amazon’s bottom line and the environment is yet to be determined.

Protip — Venmo Privacy Alert

Venmo’s social feed can inadvertently expose sensitive financial details. If you’ve ever used the platform, follow Brian Chen’s advice to review your privacy settings to ensure that your personal information remains confidential.

Screening Room — Hyundai Puts Its Family SUV in Grand Theft Auto

Science Fiction World — New Force of Nature? 

Researchers at Fermilab near Chicago are closing in on the potential discovery of a fifth fundamental force of nature, building on findings from 2021. Evidence from the behavior of subatomic particles called muons suggests they are being influenced by an unknown force. This discovery, part of the ‘g minus two (g-2)’ experiment, may lead to a revolution in physics.

Coffee Break — Smarter Than A Scammer?

The Washington Post has another great interactive quiz for you to test your wits against digital scammers.

Sign of the Times