Dec 07

samleecole shares a report from Motherboard: A lineup of female celebrities stands in front of you. Their faces move, smile, and blink as you move around them. They’re fully nude, hairless, waiting for you to decide what you’ll do to them as you peruse a menu of sex positions. This isn’t just another deepfake porn video, or the kind of interactive, 3D-generated porn Motherboard reported on last month, but a hybrid of both, which gives people even more control over women’s virtual bodies. This new type of nonconsensual porn uses custom 3D models that can be articulated and animated, which are then made to look exactly like specific celebrities with deepfaked faces. Until recently, deepfake porn consisted of taking the face of a person — usually a celebrity, almost always a woman — and swapping it onto the face of an adult performer in an existing porn video. With this method, a user can make a 3D avatar with a generic face, capture footage of it performing any kind of sexual act, then run that video through an algorithm that swaps the generic face with a real person’s.

Read more of this story at Slashdot.

Dec 06

The WebAssembly Working Group has today published the three WebAssembly specifications as W3C Recommendations, marking the arrival of a new language for the Web that allows code to run in the browser. From a report: WebAssembly Core Specification defines a low-level virtual machine which closely mimics the functionality of many microprocessors upon which it is run. Either through Just-In-Time compilation or interpretation, the WebAssembly engine can perform at nearly the speed of code compiled for a native platform. A .wasm resource is analogous to a Java .class file in that it contains static data and code segments which operate over that static data. Unlike Java, WebAssembly is typically produced as a compilation target from other programming languages like C/C++ and Rust.

WebAssembly Web API defines a Promise-based interface for requesting and executing a .wasm resource. The structure of a .wasm resource is optimized to allow execution to begin before the entire resource has been retrieved, which further enhances responsiveness of WebAssembly applications.
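
A quick sketch of that Promise-based flow (the module.wasm path here is a placeholder):

```typescript
// Streaming instantiation: hand fetch()'s pending Response straight to
// the engine so compilation can begin while bytes are still arriving.
// Requires the server to send the application/wasm MIME type.
const { module, instance } = await WebAssembly.instantiateStreaming(
  fetch("module.wasm"),
);

// Contrast with the non-streaming path, which must buffer the entire
// resource before compilation can start:
// const bytes = await (await fetch("module.wasm")).arrayBuffer();
// const result = await WebAssembly.instantiate(bytes);
```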

WebAssembly JavaScript Interface provides a JavaScript API for invoking and passing parameters to WebAssembly functions. In Web browsers, WebAssembly’s interactions with the host environment are all managed through JavaScript, which means that WebAssembly relies on JavaScript’s highly engineered security model.
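
To make that concrete, here is a hedged sketch of the JavaScript interface; demo.wasm and its add and log names are assumptions for illustration:

```typescript
// The import object supplies host functions the module may call back into.
const imports = {
  env: {
    log: (x: number) => console.log("from wasm:", x),
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("demo.wasm"),
  imports,
);

// Exported functions appear as ordinary JavaScript functions; numeric
// arguments and results cross the boundary directly, and every call is
// mediated by the JavaScript engine.
const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // 5
```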

Read more of this story at Slashdot.

Dec 05

An anonymous reader writes: Security researchers found a new vulnerability allowing potential attackers to hijack VPN connections on affected *NIX devices and inject arbitrary data payloads into IPv4 and IPv6 TCP streams. They disclosed the security flaw tracked as CVE-2019-14899 to distros and the Linux kernel security team, as well as to others impacted such as Systemd, Google, Apple, OpenVPN, and WireGuard. The vulnerability is known to impact most Linux distributions and Unix-like operating systems including FreeBSD, OpenBSD, macOS, iOS, and Android. A currently incomplete list of vulnerable operating systems and the init systems they ship with is available below, with more to be added once they are tested and found to be affected: Ubuntu 19.10 (systemd), Fedora (systemd), Debian 10.2 (systemd), Arch 2019.05 (systemd), Manjaro 18.1.1 (systemd), Devuan (sysV init), MX Linux 19 (Mepis+antiX), Void Linux (runit), Slackware 14.2 (rc.d), Deepin (rc.d), FreeBSD (rc.d), and OpenBSD (rc.d).

This security flaw “allows a network adjacent attacker to determine if another user is connected to a VPN, the virtual IP address they have been assigned by the VPN server, and whether or not there is an active connection to a given website,” according to William J. Tolley, Beau Kujath, and Jedidiah R. Crandall, researchers with Breakpointing Bad and the University of New Mexico. “Additionally, we are able to determine the exact seq and ack numbers by counting encrypted packets and/or examining their size. This allows us to inject data into the TCP stream and hijack connections,” the researchers said.
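
The seq-inference step lends itself to a toy illustration. The sketch below is a self-contained simulation of the idea, not the researchers' actual probe traffic: the attacker gets only a yes/no signal (an extra encrypted reply packet) per spoofed probe, strides through the 32-bit sequence space in window-sized steps to find any in-window guess, then binary-searches for the window's left edge. Sequence-number wraparound is ignored for brevity.

```typescript
// Toy model of the CVE-2019-14899 inference idea (illustration only).
// oracle(seq) returns true when a spoofed probe with that sequence
// number provokes a visible encrypted reply, i.e. the guess is in-window.
type Oracle = (seqGuess: number) => boolean;

const SEQ_SPACE = 2 ** 32;

// Phase 1: stride in window-sized steps; one stride must land inside
// the victim's receive window.
function findInWindowSeq(oracle: Oracle, window: number): number {
  for (let guess = 0; guess < SEQ_SPACE; guess += window) {
    if (oracle(guess)) return guess;
  }
  throw new Error("no in-window guess found");
}

// Phase 2: binary-search for the left edge of the window, i.e. the
// smallest sequence number that still elicits a reply (rcv_nxt).
function findExactSeq(oracle: Oracle, hit: number, window: number): number {
  let lo = hit - window + 1; // the edge lies in (hit - window, hit]
  let hi = hit;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (oracle(mid)) hi = mid;
    else lo = mid + 1;
  }
  return lo;
}

// Simulated victim: next expected seq 123_456_789, 64 KiB window.
const RCV_NXT = 123_456_789;
const WINDOW = 65_536;
const oracle: Oracle = (s) => s >= RCV_NXT && s < RCV_NXT + WINDOW;

const hit = findInWindowSeq(oracle, WINDOW);
console.log(findExactSeq(oracle, hit, WINDOW)); // 123456789
```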

Read more of this story at Slashdot.

Nov 29

Chinese regulators have announced new rules governing video and audio content online, including a ban on the publishing and distribution of “fake news” created with technologies such as artificial intelligence and virtual reality. From a report: Any use of AI or virtual reality also needs to be clearly marked in a prominent manner and failure to follow the rules could be considered a criminal offense, the Cyberspace Administration of China (CAC) said on its website. The rules, effective Jan. 1, were published publicly on its website on Friday after being issued to online video and audio service providers last week. In particular, the CAC highlighted potential problems caused by deepfake technology, which uses AI to create hyper-realistic videos where a person appears to say or do something they did not. Deepfake technology could “endanger national security, disrupt social stability, disrupt social order and infringe upon the legitimate rights and interests of others,” according to a transcript of a press briefing published on the CAC’s website.

Read more of this story at Slashdot.

Nov 26

In a Quartz article, Adam Epstein writes about the filmmaking technology used to film The Mandalorian on Disney+: Industrial Light & Magic (ILM) — the Lucasfilm subsidiary George Lucas founded in 1975 to make the visual effects for Star Wars — deployed a real-time 3D projection system called “Stagecraft” on the Disney+ series that could, eventually, replace green-screen as the film industry standard for rendering virtual environments. The company has been testing Stagecraft for five years — most recently on the Star Wars spin-off movie Solo in 2018. But The Mandalorian, the flagship series on Disney’s new streaming service, likely marks the most extensive use yet of the new system.

Stagecraft’s chief innovation is that it can project a 3D visual environment around the actors that changes in real time to match the perspective of the camera. When the camera moves, the background moves too, simulating the experience of filming in a different location. It’s a significant upgrade from green-screen technology, which requires filmmakers to layer in a static image or footage after filming in front of the blank backdrop. […] The tech has a wide range of benefits. For starters, it can draw better performances from the actors, who don’t have to imagine the environment they are in, as they do when filming in front of a green screen. They can instantly be transported to any location, real or made-up, and feel as though they are there. And that’s another big advantage: Stagecraft allows films and TV shows to simulate environments without actually having to send an entire production there to film. “One downside is that the displays used in Stagecraft require liquid crystals that take several years to grow,” the report adds. “Growing and maintaining these crystals, which are the backbone of LCD (liquid crystal display) screens, can be expensive and time-consuming, perhaps complicating the attempts of other companies to adapt the technology.”
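
The "background moves with the camera" effect generally comes down to off-axis projection: re-deriving the render frustum every frame from the tracked camera position and the fixed screen plane. Here is a minimal sketch of that math (Kooima's generalized perspective projection) under a planar-wall assumption; the names are mine, and this is not ILM's actual implementation.

```typescript
type Vec3 = [number, number, number];

const sub = (a: Vec3, b: Vec3): Vec3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot = (a: Vec3, b: Vec3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];
const norm = (a: Vec3): Vec3 => {
  const l = Math.hypot(a[0], a[1], a[2]);
  return [a[0] / l, a[1] / l, a[2] / l];
};

// pa, pb, pc: lower-left, lower-right, upper-left corners of the wall in
// world space; pe: tracked position of the film camera's entrance pupil.
function offAxisFrustum(pa: Vec3, pb: Vec3, pc: Vec3, pe: Vec3, near: number, far: number) {
  const vr = norm(sub(pb, pa));   // screen-space right
  const vu = norm(sub(pc, pa));   // screen-space up
  const vn = norm(cross(vr, vu)); // screen normal, toward the camera
  const va = sub(pa, pe);
  const vb = sub(pb, pe);
  const vc = sub(pc, pe);
  const d = -dot(va, vn);         // camera-to-wall distance
  // Frustum extents on the near plane; they change every frame as the
  // camera moves, which is what produces correct parallax on the wall.
  return {
    left: dot(vr, va) * near / d,
    right: dot(vr, vb) * near / d,
    bottom: dot(vu, va) * near / d,
    top: dot(vu, vc) * near / d,
    near,
    far, // feed these to a glFrustum-style projection matrix
  };
}
```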

This video from Unreal Engine shows a smaller-scale version of the tech in action.

Read more of this story at Slashdot.

Nov 25

Amazon’s annual AWS re:Invent conference in Las Vegas — where the tech giant reliably announces a host of products heading to Amazon Web Services, its cloud platform — doesn’t kick off officially until next week. But that didn’t stop the company from previewing a few of the highlights, the bulk of which relate to the internet of things (IoT). From a report: Why the investment in IoT? Perhaps because AWS maintains pole position in the segment, which is anticipated to be worth $212 billion by the end of 2019. Amazon CTO Werner Vogels told VentureBeat in a recent interview that AWS customers deploy upwards of hundreds of thousands of sensors. First on the list was Alexa Voice Service (AVS) Integration for AWS IoT Core, the managed cloud service that lets gadgets interact with cloud apps and other devices. It’s designed to let manufacturers create Alexa built-in devices — or accessories that connect to Alexa to play music, control smart home devices, and more — with constrained hardware resources. Alexa built-in devices previously required at least 100MB of RAM and an Arm Cortex-A-class microprocessor, but thanks to new AWS cloud processing components that offload tasks like buffering and mixing audio, the baseline requirement has been reduced to 1MB of RAM and an Arm Cortex-M-class microcontroller. Alexa Voice Service (AVS) Integration for AWS IoT Core specifically offloads media retrieval, audio decoding, audio mixing, and state management to a new virtual Alexa built-in device in the cloud. New AWS IoT-reserved MQTT topics allow for message transfer between devices connected to AWS IoT Core and AVS using the MQTT protocol.
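
As a rough sketch of the device side of such an MQTT exchange, using the existing Node.js aws-iot-device-sdk (the endpoint, credential paths, and topic below are placeholders; the actual AVS-reserved topic names are not spelled out in the report):

```typescript
import * as awsIot from "aws-iot-device-sdk";

// Connect to AWS IoT Core over MQTT with X.509 certificate authentication.
const device = new awsIot.device({
  keyPath: "private.pem.key",
  certPath: "certificate.pem.crt",
  caPath: "AmazonRootCA1.pem",
  clientId: "alexa-builtin-demo-device",
  host: "example-ats.iot.us-east-1.amazonaws.com", // your IoT endpoint
});

// Placeholder topic; AVS for AWS IoT defines its own reserved topics.
const TOPIC = "demo/alexa/events";

device.on("connect", () => {
  device.subscribe(TOPIC);
  device.publish(TOPIC, JSON.stringify({ event: "SpeechStarted" }));
});

device.on("message", (topic: string, payload: Buffer) => {
  console.log(`message on ${topic}: ${payload.toString()}`);
});
```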

Read more of this story at Slashdot.

Nov 24

An anonymous reader quotes the International Business Times:

At the recent Tianfu Cup held in Chengdu, China, the country’s top white-hat hackers converged to test zero-days against top software available in the market today. During the first day of the event, Chinese security researchers were able to break into major browsers such as Safari, Microsoft Edge, and Google Chrome.

Since March 2018, the Chinese government has officially discouraged security researchers from joining hacking competitions outside the country. The recent Tianfu Cup is the venue for hackers to showcase their skills and even earn six-figure bounties for successful exploits. Former Pwn2Own winner Team 360 Vulcan took home $382,500 for successfully hacking the old version of Office 365, Microsoft Edge, Adobe PDF Reader, VMware Workstation, and qemu+Ubuntu during the two-day event, reports ZDNet… Google was represented at the event, with some members of the Google Chrome security team present on site. Organizers plan to submit a report of all bugs uncovered during the event to all vendors when the competition concludes, says ZDNet.

Read more of this story at Slashdot.

Nov 21

An anonymous reader quotes a report from Ars Technica: After a tease earlier this week, Valve has revealed more details and a new trailer for the first new Half-Life content in over a decade. The “full-length” Half-Life: Alyx will hit Steam in March 2020, Valve says, with support for “all PC-based VR headsets.” Pre-orders are already available for $59.99, though the game will be free if you own a Valve Index headset. The game, which Valve says is “set between the events of Half-Life and Half-Life 2,” has been “designed from the ground up for Virtual Reality” (i.e. you can stop hoping for a 2D monitor release). “Everyone at Valve is excited to be returning to the world of Half-Life,” Valve founder Gabe Newell said in a statement. “VR has energized us.”

Today’s video trailer shows that next year’s Alyx-ization of Half-Life is equal parts abstract and concrete. The VR perspective from today’s trailer doesn’t include any floating body parts or feet; the only part of your virtual self you’ll see, at least in today’s trailer, is your hands, covered in a pair of gloves. Yet we also hear Alyx’s voice, which indicates that this game’s protagonist won’t be nearly as silent as Freeman in his own mainline adventures. Today’s announcement includes video footage that confirms a data-leak examination by Valve News Network earlier this year: a new manipulation system dubbed the Gravity Gloves. And boy do these things look cool. Need to grab or pick something up? Point at whatever that object is (whether it’s close or far away) with an open hand until it glows orange, then close your hand and flick your wrist toward yourself to fling the item in your direction. At this point, you get a moment to physically “catch” the object in question. Point, clench, flick, catch.

Today’s trailer also confirms bits and pieces of the exciting HLA details I’ve previously heard about from multiple sources. For instance, the trailer includes teases of the game’s approach to VR-exclusive puzzles, particularly those that require moving hands around a three-dimensional space. Some of these puzzles will require scanning and finding clues hidden inside of the virtual world’s walls (and moving or knocking down anything hindering your ability to see or touch said walls). Other puzzles will require arranging what look like constellations or grids of stars around a 3D space in order to match certain patterns. And then there’s the matter of familiar Half-Life creatures coming to life for the first time in over 12 years, which means they’re that much more detailed and gruesome as rendered in the Source 2 engine. The Half-Life website specifies that this game can be played sitting, standing, or with “roomscale” movement. Players can use finger-tracking or trigger-based VR controllers and move around the VR environments by “teleporting” from point A to point B, “shifting” smoothly to a new position, or just walking continuously with an analog stick.

Read more of this story at Slashdot.

Nov 20

Yesterday, Valve announced Half-Life: Alyx, the first new game in the acclaimed Half-Life series in well over a decade. And unlike the previous Half-Life installments, this game will be playable exclusively in virtual reality. The Verge reports: We don’t currently have any details beyond the tweet from Valve above, which appears to be the first tweet from a new, Twitter-verified Valve Software account established in June. But clearly, we’ll be learning more on Thursday, presumably from this social media account, at 10am PT. Though the Half-Life games are some of the most influential and critically acclaimed PC games ever made, Valve has famously never finished either of its supposed Half-Life trilogies. After Half-Life and Half-Life 2, the company created Half-Life: Episode 1 and Half-Life: Episode 2, but no third game in the series. The closest we’ve come to knowing anything about where Half-Life was headed was this thinly veiled fanfic from former Valve writer Marc Laidlaw.

Read more of this story at Slashdot.

Nov 19

Xiaomi today unveiled a new iteration of its virtual assistant Xiao Ai and shared a new feature of its Android-based MIUI operating system as the publicly listed Chinese technology group pushes to expand its internet services ecosystem. From a report: At its annual Mi Developer conference in Beijing, the company said it is integrating an earthquake warning function into MIUI for select users in China. The integration, touted as the first of its kind globally, will enable alerts to be sent to smartphones running MIUI 11 and Mi TV “seconds to tens of seconds” before the quake waves arrive, Xiaomi said. The feature, first tested in September this year, was developed in partnership with the Institute of Care-life, a Chengdu-based organization focused on natural disaster warning. Xiaomi said it has activated the feature for the earthquake-prone Sichuan Province and plans to expand it nationwide soon. Wang Tun, head of the institute, said this function, unlike those available through apps in some countries, works more efficiently and does not rely on a working internet connection.

Read more of this story at Slashdot.
