In brief Elon Musk wants Tesla’s robot butler to be able to cook, mow lawns, and care for the elderly, he wrote in an essay published in a magazine backed by the Cyberspace Administration of China, the country’s internet regulator.
The billionaire has previously hinted that a prototype of the robot, named Optimus, may debut at the end of next month at the company’s annual AI Day.
Plans to develop a humanoid robot were announced last year, when Tesla hilariously hired a man in a skin-tight suit to dance around on stage and pretend to be a robot.
“Tesla Bots are initially positioned to replace people in repetitive, boring, and dangerous tasks,” Musk wrote in the essay. “But the vision is for them to serve millions of households, such as cooking, mowing lawns, and caring for the elderly.”
Optimus will be about the same size and build as an average adult and will be able to “carry or pick up heavy objects, walk fast in small steps, and the screen on its face is an interactive interface for communication with people,” he added. We’ll believe it when we see it.
How does the Department of Homeland Security use AI?
The US Department of Homeland Security this week publicly revealed a list of the non-classified, non-sensitive AI technologies used by its agencies.
The information was released to comply with Executive Order 13960, signed in 2020 by President Donald Trump, aimed at promoting the federal government’s use of trustworthy AI.
The list covers the US Citizenship and Immigration Services (USCIS) using computer vision to assess the quality of fingerprint scans, and the Transportation Security Administration deploying the PageRank algorithm to figure out the most popular airports that could be COVID-19 hotspots based on “historical non-PII travel data.”
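PageRank is best known for ranking web pages by their incoming links, but the same idea applies to any directed graph, including travel routes: an airport that receives traffic from other well-travelled airports scores highly. A minimal power-iteration sketch, using a made-up toy graph of trips (the airport codes and routes here are purely illustrative, not DHS data):

```python
# Hypothetical sketch: ranking airports with PageRank over a toy
# directed graph of passenger trips (origin -> destination).
# Airports and routes below are invented for illustration.

def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank over a dict {node: [out-neighbours]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        # Every node starts each round with the "teleport" share.
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outs in graph.items():
            if not outs:
                # Dangling node: spread its rank evenly over all nodes.
                for other in nodes:
                    new_rank[other] += damping * rank[node] / n
            else:
                share = damping * rank[node] / len(outs)
                for out in outs:
                    new_rank[out] += share
        rank = new_rank
    return rank

# Toy travel graph: an edge means passengers flew that route.
trips = {
    "JFK": ["LAX", "ORD"],
    "LAX": ["JFK"],
    "ORD": ["JFK", "LAX"],
    "ATL": ["ORD"],
}
ranks = pagerank(trips)
busiest = max(ranks, key=ranks.get)  # airport with the highest score
```

In this toy graph JFK comes out on top because it collects all of LAX’s traffic plus half of ORD’s, which is the intuition the TSA is reportedly exploiting at national scale.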
Homeland Security also makes heavy use of natural language processing. Multiple agencies, including the Cybersecurity and Infrastructure Security Agency, Immigration and Customs Enforcement, and Customs and Border Protection, use it for so-called sentiment analysis, automatically translating text, and detecting personally identifiable information in documents. USCIS also has a system designed to analyze applications from asylum seekers to detect fraudulent cases by looking for signs of plagiarism.
The whole list is published here.
AI art features on social media
The ability to generate Midjourney-like images from text prompts is being integrated into social media platforms.
TikTok has rolled out what it calls an “AI greenscreen.” Users type a text description of an image, and a model working in the background generates it for display in the app, where it can be used as a prop or a background.
The images aren’t very realistic, and there are limits to what can and can’t be generated. The model wasn’t good at creating accurate pictures of US President Joe Biden or UK Prime Minister Boris Johnson, and it was quite rubbish at depicting nudity.
“The limitations of TikTok’s model may well be intentional,” The Verge noted. “First, more advanced models require greater computing power, which would be expensive and resource-intensive for the company to implement. Secondly, TikTok has more than a billion users, and giving all these individuals the power to create photorealistic images of anything they can imagine would almost certainly produce some troubling results.”
Text-to-image models will no doubt feature on more social media platforms. Meta’s CEO Mark Zuckerberg previously demonstrated a similar ability in the company’s Metaverse VR world. ®