Ikea and Sonos are partnering on a new range of connected speakers that will be available in August 2019. These aren’t just cheap Ikea speakers with a Sonos logo; you’ll be able to control them from the Sonos app just like a normal Sonos product. Ikea and Sonos showcased two models for now — a bookshelf speaker that will cost $99 and a table lamp speaker that will retail for $179. Both will be available in black and white.

The idea is to hide these speakers in shelves and lamps so that you’re surrounded by sound without even noticing the hardware. You can use the bookshelf speaker horizontally or vertically, and you can also mount it on an Ikea Kungsfors rack, where it can act as a standalone shelf if you want to put a plant or some decoration on top of it. The table lamp is quite straightforward: it combines light and sound in a single object. It looks like an Amazon Echo Plus or an Apple HomePod with a lamp on top. If you live in a tiny apartment, you could save some valuable space by replacing two objects with one.

The best part is that these new speakers will integrate with other Sonos speakers just like any Sonos product. For instance, you can pair two of them to create stereo separation, or pair them with a Sonos Beam to build a solid sound system for your TV. If you wanted to add a Sonos speaker to your bathroom but didn’t want to spend $200 on a Sonos One, you could consider hiding a bookshelf speaker in a corner. It might not be as powerful as a Sonos One, but customers will benefit from more options.

The Symfonisk line connects to your Wi-Fi network, so you can use the normal Sonos app, control music from Spotify’s app using Spotify Connect and send music to your speakers with AirPlay 2. Today’s new speakers don’t have any microphones, so you won’t be able to control your music with Amazon Alexa directly.
The sci-fi blockbuster Westworld has been an inspiring look into what humanlike robots could do for us in meatspace. While current technologies aren’t yet able to make Westworld a reality, startups are attempting to replicate the sort of human-robot interaction it presents in virtual space. Rct studio, which just graduated from Y Combinator and ranked among TechCrunch’s picks from the batch, is one of them.

The “Westworld” in the TV series, a far-future theme park staffed by highly convincing androids, lets visitors live out their heroic and sadistic fantasies free of consequences. There are a few reasons why rct studio, which is keeping mum about the meaning of its deliberately lower-cased name for later revelation, is going for a computer-generated world instead. Besides the technical challenge, playing a fictional universe out virtually does away with the geographic constraint. The Westworld experience, in contrast, happens within a confined, meticulously built park.

“Westworld is built in a physical world. I think in this age and time, that’s not what we want to get into,” Xinjie Ma, who heads up marketing for rct, told TechCrunch. “Doing it in the physical environment is too hard, but we can build a virtual world that’s completely under control.”

Rct studio wants to build the Westworld experience in virtual worlds. / Image: rct studio

The startup appears suited to the task. The eight-person team is led by Cheng Lyu, the 29-year-old entrepreneur who goes by Jesse and helped Baidu build up Raven from scratch after the Chinese search giant acquired the startup. Along with several of Raven’s core members, Lyu left Baidu in 2018 to start rct. “We appreciate a lot the support and opportunities given by Baidu and during the years we have grown up dramatically,” said Ma, who previously oversaw marketing at Raven.

Let AI write the script

Immersive films, or games, depending on how one wants to classify the emerging field, are already available with pre-written scripts for users to pick from.
Rct wants to take the experience to the next level by recruiting artificial intelligence for screenwriting. At the center of the project is the company’s proprietary engine, Morpheus. Rct feeds it mountains of data based on human-written storylines so the characters it powers know how to adapt to situations in real time. When the code is sophisticated enough, rct hopes the engine can self-learn and formulate its own ideas. “It takes an enormous amount of time and effort for humans to come up with a story logic. With machines, we can quickly produce an infinite number of narrative choices,” said Ma.

To venture through rct’s immersive worlds, users wear a virtual reality headset and control their simulated selves via voice. The choice of audio came as a natural step given the team’s experience with natural language processing, but the startup also welcomes the chance to develop new devices for more lifelike journeys. “It’s sort of like how the film Ready Player One built its own gadgets for the virtual world. Or Apple, which designs its own devices to carry out superior software experience,” explained Ma.

On the creative front, rct believes Morpheus could be a productivity tool for filmmakers, as it can take a story arc and dissect it into a decision-making tree within seconds. The engine can also render text to 3D images, so when a filmmaker inputs the text “the man throws the cup to the desk behind the sofa,” the computer can instantly produce the corresponding animation.

Path to monetization

Investors are buying into rct’s offering. The startup is about to close its Series A funding round just months after banking seed money, including from a Chinese venture capital firm, the startup told TechCrunch. The company has a few imminent tasks before achieving its Westworld dream. For one, it needs a lot of technical talent to train Morpheus with screenplay data.
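For a sense of what dissecting a story arc into a decision-making tree can mean in practice, here is a purely illustrative sketch. The scene names, structure and `advance` method are all invented for this example; this is a toy data structure, not rct’s actual Morpheus engine.

```python
# Toy model of a branching narrative: each scene offers a set of
# player actions, and each action leads to another scene.
from dataclasses import dataclass, field


@dataclass
class Scene:
    description: str
    choices: dict = field(default_factory=dict)  # action -> Scene

    def advance(self, action):
        """Return the next scene for an action, or None if the
        action isn't part of this scene's decision tree."""
        return self.choices.get(action)


ending_a = Scene("The host lets you leave the park.")
ending_b = Scene("The host blocks the exit.")
root = Scene(
    "A host greets you at the gate.",
    {"greet back": ending_a, "draw weapon": ending_b},
)
```

In a system like the one described, the interesting work is generating and expanding such trees automatically from a story arc rather than hand-writing every branch.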
No one on the team had experience in filmmaking, so it’s on the lookout for a creative head who appreciates AI’s application in films.

Rct studio’s software takes a story arc and dissects it into a decision-making tree within seconds. / Image: rct studio

“Not all filmmakers we approach like what we do, which is understandable because it’s a very mature industry, while others get excited about tech’s possibility,” said Ma.

The startup’s entry into the fictional world was less about a passion for films than an imperative to shake up a traditional space with AI. Smart speakers were its first foray, but making changes to tangible objects that people are already accustomed to proved challenging. There has been some progress, but such devices are far from achieving ubiquity. Then movies crossed the team’s mind. “There are two main routes to make use of AI. One is to target a vertical sector, like cars and speakers, but these things have physical constraints. The other application, like Go, largely exists in the lab. We wanted something that’s both free of physical limitation and holds commercial potential.”

The Beijing- and Los Angeles-based startup isn’t content with just making the software. Eventually, it wants to release its own films. The company has inked a long-term partnership with a Chinese sci-fi publisher representing about 200 writers, including the Hugo award-winning Cixin Liu. The pair is expected to start co-producing interactive films within a year.

Rct’s path is reminiscent of a giant that precedes it: Pixar. The Chinese company didn’t exactly look to the California-based studio for inspiration, but the analogy was a useful shortcut when pitching to investors. “A confident company doesn’t really draw parallels with others, but we do share similarities to Pixar, which also started as a tech company and publishes its own films,” said Ma. “A lot of studios are asking how much we price our engine at, but we are targeting the consumer market.
Making our own films carries so many more possibilities than simply selling a piece of software.”
The number of U.S. smart speaker owners grew 40 percent over 2018 to reach 66.4 million — or 26.2 percent of the U.S. adult population — according to a new report from Voicebot released this week, which detailed adoption patterns and device market share. The report also reconfirmed Amazon Echo’s lead, noting the Alexa-powered smart speaker line grew to a 61 percent market share by the end of last year — well above Google Home’s 24 percent share.

These findings fall roughly in line with other analysts’ reports on smart speaker market share in the U.S. However, because of varying methodology, they don’t all come back with the exact same numbers. For example, one December 2018 estimate had the Echo accounting for nearly 67 percent of all U.S. smart speaker sales in 2018. Meanwhile, another put Amazon even further ahead, with a 70 percent share of the installed base in the U.S. Though the percentages differ, the overall trend is the same: Amazon Echo remains the smart speaker to beat.

While on the face of things this appears to be great news for Amazon, the report did note that Google Home has been closing the gap with Echo in recent months. Amazon Echo’s share dropped nearly 11 percentage points over 2018, while Google Home made up for just over half that decline with a 5.5-point gain, and “other” devices made up the rest. This latter category, which includes devices like Apple’s HomePod and the Sonos One, grew last year to account for 15 percent of the market. That said, the Sonos One has Alexa built in, so the shift may not be as bad for Amazon as the numbers alone seem to indicate. After all, Amazon is selling its Echo devices at cost or even at a loss to snag more market share. The real value over time will be in controlling the ecosystem.

The growth in smart speakers is part of a larger trend toward voice computing and smart voice assistants — like Siri, Bixby and Google Assistant — which are often accessed on smartphones. A related report from Juniper Research last month estimated that the number of voice assistants in use will grow substantially from the 2.5 billion in use at the end of 2018.
This is due to the increased use of smartphone assistants as well as the smart speaker trend, the firm said. Voicebot’s report also showed how being able to access voice assistants on multiple platforms helps boost usage numbers. It found that smart speaker owners used their smartphone’s voice assistant more than those who didn’t have a smart speaker in their home. It seems consumers get used to being able to access their voice assistants across platforms — now that Siri has made the jump to speakers and Alexa to phones, for instance. The full report is available on Voicebot.ai’s website.
If you’re craving a truly different sound with which to slay the crew this weekend, look no further than this album — though you may have to drag your old 486 out of storage to play it. Yes, this album runs in MS-DOS, and its music is produced entirely through the PC speaker — you know, the one that can only beep.

Now, chiptunes aren’t anything new. But the more popular ones tend to imitate the sounds found in classic computers and consoles like the Amiga and SNES. It’s just limiting enough to make it fun, and of course many of us have a lot of nostalgia for the music from that period. But fewer among us look back fondly on the days before sample-based digital music, before even decent sound cards let games have meaningful polyphony and such. The days when the only thing your computer could do was beep, and when it did, you were scared.

A programmer and musician who’s been doing “retro” sound since before it was retro took it upon himself to make some music for this extremely limited audio platform. Originally he was just planning on making a couple of tunes for a game project, but, as he explains, it ended up ballooning as he got into the tech. “A few songs became a few dozens, collection of random songs evolved into conceptualized album, plans has been changing, deadlines postponing. It ended up to be almost 1.5 years to finish the project,” he writes (I’ve left his English as I found it, because I like it).

Obviously the speaker can do more than just “beep,” though indeed it was originally meant as the most elementary auditory feedback for early PCs. In fact, the tiny loudspeaker is capable of a range of sounds and can be updated 120 times per second, but in true monophonic style it can only produce a single tone at a time, between 100 and 2,000 Hz, and that in a square wave.
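Those constraints are concrete enough to sketch in code. Here is a toy renderer, my own illustration and not the album’s actual player, that models the speaker as one square-wave voice whose pitch is limited to 100–2,000 Hz and can change only 120 times per second:

```python
# Toy PC-speaker-style renderer: a single square-wave voice with
# pitch changes quantized to 120 update slots per second.
SAMPLE_RATE = 44100
UPDATE_HZ = 120            # pitch can change ~120 times per second
MIN_F, MAX_F = 100, 2000   # the speaker's usable tone range


def square_wave(notes, sample_rate=SAMPLE_RATE):
    """Render (frequency, slots) pairs as a monophonic square wave.
    Each slot lasts 1/UPDATE_HZ seconds; frequency 0 means silence.
    Returns a list of float samples in [-1, 1]."""
    samples, phase = [], 0.0
    slot_len = sample_rate // UPDATE_HZ
    for freq, slots in notes:
        if freq and not (MIN_F <= freq <= MAX_F):
            raise ValueError("frequency outside speaker range")
        for _ in range(slots * slot_len):
            if freq == 0:
                samples.append(0.0)
            else:
                # Advance the phase and emit the square wave's
                # high or low level -- never anything in between.
                phase = (phase + freq / sample_rate) % 1.0
                samples.append(1.0 if phase < 0.5 else -1.0)
    return samples


# Two quarter-second beeps: A4 (440 Hz) then A5 (880 Hz)
audio = square_wave([(440, 30), (880, 30)])
```

Everything the album does has to fit through this needle’s eye: one tone at a time, hard level transitions, and a pitch that can only move in those coarse time slots.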
Inspired by games of the era that employed a variety of tricks to create the illusion of multiple instruments and drums that in fact never actually overlap one another, he produced a whole album of tracks; I think “Pixel Rain” is my favorite, but “Head Step” is pretty dope too. You can of course listen to it online or as MP3s or whatever, but the entire thing fits into a 42-kilobyte MS-DOS program you can download. You’ll need an actual DOS machine or emulator to run it, naturally.

How was he able to do this with such limited tools? Again I direct you to his write-up, where he describes, for instance, how to create the impression of different kinds of drums when the hardware is incapable of the white noise usually used to create them (and if it could, it would be unable to layer it over a tone). It’s a fun read, and the music is… well, it’s an acquired taste, but it’s original and weird. And it’s Friday.
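For the curious, one classic version of that drum trick, a percussive thump faked with a fast downward pitch sweep that briefly steals the single channel from the melody, can be sketched like this. This is a toy illustration under my own assumptions (per-slot frequency lists, invented helper names), not his actual code:

```python
# Faking a kick drum on a monophonic channel: no noise, no layering,
# just a rapid downward frequency sweep that interrupts the melody.


def kick_drum(slots=6, start=400, end=100):
    """Return a per-slot frequency list sweeping down quickly;
    the ear reads the fast sweep as a percussive thump."""
    step = (start - end) / max(slots - 1, 1)
    return [round(start - i * step) for i in range(slots)]


def arrange(melody, drum_at):
    """Monophonic arbitration: a drum steals the channel from the
    melody for its duration, then the melody resumes."""
    out = list(melody)
    for pos in drum_at:
        for i, freq in enumerate(kick_drum()):
            if pos + i < len(out):
                out[pos + i] = freq
    return out


melody = [440] * 24                 # two beats of a held A4
track = arrange(melody, [0, 12])    # kick on each beat
```

Because the drum and the tone never actually sound at once, the listener’s brain fills in the overlap, which is exactly the illusion the album leans on.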