Hidden Buttons, Dumb Password Rules, BLE Relay Attack, & Stealthy UEFI – PSW #775
In the Security News: Using HDMI radio interference for high-speed data transfer, Top 10 open source software risks, Dumb password rules, Grand Theft Auto, The false promise of ChatGPT, The “Hidden Button”, How a single engineer brought down twitter, Microsoft’s aim to reduce “Tedious” business tasks with new AI tools, The internet is about to get a lot safer, All that, and more!
We’d like to invite our listeners to be part of our 2023 SC Awards!
Our prestigious and competitive SC Awards program recognizes outstanding innovations, organizations, and leaders that are advancing the practice of information security. This year, there are awards in 36 categories up for grabs, including best IT security-related training program, innovator of the year, best SASE solution, and more. We’d love to see your company in the spotlight!
Visit securityweekly.com/scawards to submit your entries by March 20!
- 1. Using HDMI radio interference for high-speed data transfer
- 2. Introducing The Top 10 Open Source Software (OSS) Risks
- 3. NVD makes up vulnerability severity levels
- 4. A Day Later: Analyzing Biden’s National Cybersecurity Strategy
- 5. Dumb Password Rules
- 6. Electrify America bug opens hacking vulnerability concerns [Updated]
- 7. 40% of Log4j Downloads Still Vulnerable
- 8. Grand Theft Auto – A peek of BLE relay attack
- 9. This Researcher Steals Data With Noise and Light
- 1. Stealthy UEFI malware bypassing Secure Boot enabled by unpatchable Windows flaw
Nothing to see here...please move along.
- 2. U.S. Special Forces Want to Use Deepfakes for Psy-ops
“When it comes to disinformation, the Pentagon should not be fighting fire with fire,” says Chris Meserole, head of the Brookings Institution’s Artificial Intelligence and Emerging Technology Initiative. Um, it's kinda what we do.
- 3. Why Data Breaches Are Increasing, and 6 Ways Companies Can Avoid Them
The claim as to "why" is simply, "the failure to consider privacy and data protection from the earliest product design stage." I think there's more to it than that, but then again...maybe that's a good place to start.
- 4. Lessons Learned from FTC Enforcement Action Against BetterHelp
The Federal Trade Commission (FTC) is on a roll in its efforts to signal to the digital health industry that data privacy must be a priority. The FTC announced a consent decree with BetterHelp on March 2, 2023, to settle claims that the online mental health treatment company engaged in unfair and deceptive trade practices when it made website visitor information available to third parties for marketing and advertising purposes. The settlement highlights the growing risk associated with the use of third-party cookies and pixels on websites for companies that offer health services.
- 5. How will the government enforce the national cyber strategy?
Efforts to enact laws and regulations that impose greater responsibility on the technology sector aren’t likely to come quick or easy.
- 6. Where Compliance Falls Short: Taking a Proactive Approach to Risk in the Healthcare Industry
The newly released U.S. Cybersecurity Strategy points to more regulation in more places, and yet compliance alone doesn't seem to work. Is there a middle ground, or a different approach altogether?
- 7. ‘Password’ Still the Most Common Term Used by Hackers to Successfully Breach Enterprise Networks According to Specops 2023 Weak Password Report
Does this surprise anyone?
- 8. SPECOPS Weak Password Report 2023
Here's the actual report - and I didn't even have to submit any PII to get it!
- 9. Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
Of course they are.
- 1. Noam Chomsky: The False Promise of ChatGPT
AI simply finds patterns in text and extrapolates from them. This is not how humans learn language at all, because humans reason about meaning and use language to express concepts, using concepts like truth and morality. This means that AI will be far less useful than its enthusiasts predict, because it is profoundly stupid.
- 2. Proof-of-Concept released for critical Microsoft Word RCE bug
The RTF parser in Microsoft Word has a heap corruption vulnerability triggered “when dealing with a font table (fonttbl) containing an excessive number of fonts (f###).” A proof-of-concept exploit is small enough to fit in a tweet. The vulnerability was assigned a 9.8 out of 10 severity score, and Microsoft addressed it in the February Patch Tuesday security updates along with a couple of workarounds.
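To make the "tweet-sized" claim concrete, here is a hedged sketch of what an RTF font table with many `f###` entries looks like. The `rtf_with_fonts` helper and the font count are my own illustrative assumptions; this string is NOT the published PoC and is not expected to crash anything.

```python
# Illustrative only: the shape of an RTF font table (\fonttbl) with many
# \f### entries -- the construct the reported PoC abuses. This helper and
# its font definitions are simplified assumptions, not real exploit code.
def rtf_with_fonts(count: int) -> str:
    fonts = "".join(f"{{\\f{i}A;}}" for i in range(count))
    return "{\\rtf1{\\fonttbl" + fonts + "}}"

sample = rtf_with_fonts(5)
print(sample)  # a tiny, well-formed RTF document with five font entries
```

The point is simply that each entry is a handful of bytes, so even a very large font count fits in very little text.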
- 3. Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears
People are using ChatGPT for business tasks. In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and their medical condition and asked ChatGPT to craft a letter to the patient's insurance company. The problem is that ChatGPT remembers requests and can repeat them back later to threat actors who perform training data extraction attacks.
- 4. Hospital’s water purification system stripped out chlorine, killing 3 patients
Water purification systems installed in two ice machines in a Boston hospital were supposed to make the water taste and smell better for patients on a surgery floor—but they ended up killing three of them. Because the systems stripped chlorine from the municipal tap water, bacteria normally found at low levels were able to flourish and form biofilms inside the machines.
- 5. Billions of Gmail users warned to click ‘hidden button’ that saves you losing everything in ‘bank drain attack’
The headline is FUDdy. The point is that there's a "Start my Security Checkup" button in Gmail that guides users through five steps, such as adding account recovery options, setting up 2-Step Verification for extra account security, and checking account permissions.
- 6. The internet is about to get a lot safer
Two new laws passed in the EU last year: the Digital Services Act (DSA) and the Digital Markets Act (DMA). They will force changes to content moderation, transparency, and safety features on Google, Instagram, Wikipedia, and YouTube over the next six months. The largest companies, with over 45 million active monthly users in the EU (or roughly 10% of EU population), are called “Very Large Online Platforms” (or VLOPs) or “Very Large Online Search Engines” (or VLOSEs) and will be held to the strictest standards of transparency and regulation. They will be required to assess risks on their platforms, like the likelihood of illegal content or election manipulation, and make plans for mitigating those risks with independent audits to verify safety.
- 7. The “Get cookies.txt” Chrome extension is now actively malware
This extension collects cookies from the browser, to help people download YouTube videos. But it sent those cookies to the developer for months before finally being removed from the Chrome Web Store. This Reddit thread shows how the malicious action was detected and dealt with.
- 8. POLYNONCE: A TALE OF A NOVEL ECDSA ATTACK AND BITCOIN TEARS
The authors applied a novel attack against ECDSA to datasets they found in the wild, including the Bitcoin and Ethereum networks. ECDSA signatures use a nonce, which should be random but in practice is often generated by a weak pseudorandom number generator. If the nonces used for several signatures with the same private key are related by a simple polynomial, the private key can be extracted from the signatures. The authors recovered 762 Bitcoin private keys and found that a threat actor had already stolen the Bitcoin from those accounts.
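A minimal sketch of why related nonces are fatal, assuming the simplest case (a linear relation k2 = k1 + c with known c, rather than the paper's general polynomial attack). All values are synthetic: the curve point multiplication is faked, since only the modular algebra of the signing equation matters here.

```python
# Toy illustration, NOT the paper's full polynomial attack: with two
# signatures whose nonces satisfy k2 = k1 + c, the private key d falls
# out of the two signing equations s = k^-1 (z + r*d) mod n.
import secrets

# Group order of secp256k1 (the curve Bitcoin uses)
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def sign(z, d, k):
    """ECDSA signing equation with a stand-in r (any nonzero value
    works for the algebra; a real r is the x-coordinate of k*G)."""
    r = pow(k, 3, n)
    s = pow(k, -1, n) * (z + r * d) % n
    return r, s

d = 1 + secrets.randbelow(n - 1)   # victim's private key
k1 = 1 + secrets.randbelow(n - 1)  # weakly generated nonce
c = 1
k2 = (k1 + c) % n                  # related nonce

z1, z2 = secrets.randbelow(n), secrets.randbelow(n)  # message hashes
r1, s1 = sign(z1, d, k1)
r2, s2 = sign(z2, d, k2)

# From s1*k1 = z1 + r1*d and s2*(k1 + c) = z2 + r2*d, eliminate k1:
s1_inv = pow(s1, -1, n)
num = (z2 - s2 * c - s2 * z1 * s1_inv) % n
den = (s2 * r1 * s1_inv - r2) % n
recovered = num * pow(den, -1, n) % n
assert recovered == d  # private key extracted from two signatures
```

The paper generalizes this to higher-degree polynomial relations, but the core idea is the same: any predictable relationship between nonces turns the signature equations into a solvable system.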
- 9. How a single engineer brought down Twitter on Monday
On Monday, Twitter was broken; users could not open links or view images. This was caused by an API change intended to shut down free access. Elon Musk has laid off so much of the staff that only one site reliability engineer was left on the project. On Monday, the engineer made a “bad configuration change” that “basically broke the Twitter API”. Elon Musk was furious. This was at least the sixth high-profile service outage at Twitter this year.
- 10. Microsoft aims to reduce “tedious” business tasks with new AI tools
On Monday, Microsoft bundled ChatGPT-style AI technology into its Power Platform developer tool and Dynamics 365, Reuters reports. Affected tools include Power Virtual Agent and AI Builder, both of which have been updated to include GPT large language model (LLM) technology created by OpenAI. Power Platform is a development tool that allows the creation of apps with minimal coding. Dynamics 365 Copilot automates certain "tedious tasks," such as manual data entry, content generation, and note-taking.
- 11. A Vulnerability in Implementations of SHA-3, SHAKE, EdDSA, and Other NIST-Approved Algorithms
The SHA-3 code approved by NIST contains a buffer overflow vulnerability caused by an integer overflow. It affects all software projects that have integrated this code, such as Python and PHP. The root of the problem is a mixture of 64-bit and 32-bit integers in the calculation.
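A hedged sketch of the bug pattern, emulated in Python since the affected code is C: when a 64-bit byte count is assigned into a 32-bit variable, the value silently wraps for inputs past 4 GiB, so subsequent buffer arithmetic trusts the wrong size. The function name is my own; it is not the actual code from the reference implementation.

```python
# Emulation of C's implicit truncation when a 64-bit length is stored
# in a 32-bit unsigned int -- the mixed-width arithmetic at the root of
# the SHA-3 reference-implementation bug. Illustrative, not the real code.
def partial_block_32bit(data_byte_len: int) -> int:
    """Keep only the low 32 bits, as an unsigned int assignment would."""
    return data_byte_len & 0xFFFFFFFF

huge = (1 << 32) + 200  # a little over 4 GiB of input
print(partial_block_32bit(huge))  # wraps to 200: buffer math now uses the wrong size
```

Small inputs are unaffected, which is why the flaw survived so long: it only triggers on inputs larger than 4 GiB.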
- 12. Protecting Android clipboard content from unintended exposure
Microsoft discovered that an old version of the SHEIN Android application periodically read the contents of the Android device clipboard and, if a particular pattern was present, sent the contents of the clipboard to a remote server. To prevent such unsafe practices, several improvements have been added to Android: only allowing the foreground app to access the clipboard, displaying a message the first time an app uses the clipboard, and clearing the clipboard after a period of time.
- 13. The privacy loophole in your doorbell
Police were investigating a man's neighbor, and a judge gave officers access to all of the man's Ring security-camera footage, including from inside his home. There's an overly close relationship between Amazon Ring and police, and it's disturbing that such an overly broad search warrant was approved. “They are part of an ever-expanding web of surveillance in communities across America,” Sen. Ed Markey (D-Mass.) said in a statement to POLITICO about Ring’s products. “I’ve been ringing alarms about this company’s threats to our privacy and civil liberties for years.”
- 14. The Waluigi Effect (mega-post)
Why do Large Language Models such as ChatGPT produce wrong answers? They are merely extrapolating from the input you enter. If your input sounds fantastic or fictional, they write fiction in the same genre. And even if your input is neutral, they only find the most common result in their training data, without regard to its truth. Adding resources and more training only makes the model more efficient at finding and repeating common misinformation.
- 15. We Found 28,000 Apps Sending TikTok Data. Banning the App Won’t Help.
Joe Biden gave federal agencies 30 days to remove TikTok from government devices, but federal agencies must also “prohibit internet traffic from reaching the company.” Some 28,251 apps use TikTok’s software development kits (SDKs), tools that integrate apps with TikTok’s systems—and send TikTok user data—for functions like ads within TikTok, logging in, and sharing videos from the app. A simple ban on the TikTok app itself is not going to stop data flowing to TikTok.
- 16. New Steganography Breakthrough Enables “Perfectly Secure” Digital Communications
A group of researchers has achieved a breakthrough in secure communications by developing an algorithm that conceals sensitive information so effectively that it is impossible to detect that anything has been hidden. A secret message can be concealed inside another message that is random, or inside readable text produced by AI. The modified message has statistical properties identical to the unmodified one's, so no statistical test can detect the hidden message.
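A toy analogy for the "statistically identical" property, assuming a random cover message (this is NOT the paper's algorithm, which uses minimum-entropy coupling to hide messages in AI-generated text): XOR a message into a shared-key pad. The output is uniformly random whether or not a message is present, so no test on the output alone can reveal that anything is hidden.

```python
# Toy one-time-pad-style embedding into a "random cover": the stego
# output is uniform regardless of the message, so its statistics match
# an innocent random message exactly. Illustrative analogy only.
import secrets

def embed(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

def extract(stego: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(stego, key))

msg = b"meet at dawn"
key = secrets.token_bytes(len(msg))   # shared secret
stego = embed(msg, key)               # indistinguishable from random bytes
assert extract(stego, key) == msg
```

The researchers' contribution is achieving this kind of distributional perfection for realistic covers like AI-generated text, where the cover distribution is far from uniform.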