Majd Bakar, vice president and head of engineering for Google’s Stadia game-streaming service, explains how the company built the infrastructure needed to power that service. (GeekWire screenshot)

After spending billions over the last few years to upgrade its cloud computing infrastructure, Google thinks it is ready to tackle the needs of some of the most demanding customers on the planet. On Tuesday at the Game Developers Conference in San Francisco, Google announced Stadia, a game-streaming service powered by its massive computing network that will work across mobile devices, PCs, and televisions by the end of the year. One of the biggest factors that will dictate the success or failure of Stadia will be how often users abandon games in frustration after encountering the glitches, crashes, or delays that have plagued earlier attempts at video-game streaming.

“This architecture is the foundation for the new generation of gaming,” said Majd Bakar, vice president at Google and head of engineering for Stadia, during the presentation.

Google is hardly the first company to pursue video-game streaming, but it will be the first of the big three cloud companies to ship a service designed to stream the most demanding console games, assuming everything remains on course. Microsoft plans to begin public trials of its Project xCloud game-streaming service later this year, and while Amazon Web Services provides back-end services for game developers, it doesn’t have a consumer-facing service that is capable of streaming top-tier console games like Assassin’s Creed or Doom to browsers.

Stadia will stream games in 4K image quality at up to 60 frames per second, Bakar said, which should be enough to satisfy gamers who expect smooth performance from even the biggest games. And it promised developers it would increase that performance over time: “the processing resources will scale up to match your imagination,” Bakar said.

To make this all work, Google designed a custom graphics-processing unit (GPU) and server processor with AMD.
The GPU will provide 10.7 teraflops of performance, matched with a 2.7GHz x86 server processor and 16GB of memory in each Stadia instance. Each Stadia instance is far more powerful than the current generation of video-game consoles, but Google must cope with the latency introduced by moving data over a network. (GeekWire Screenshot)

Google also touted the power of its global fiber network as an edge for Stadia, given that most game traffic will flow over its private network as opposed to the public internet. In the years it has spent constructing that network to power Google search and ads, the company has amassed hundreds of “points of presence” around the globe where users can tap into that network, as well as 7,500 edge nodes that can handle traffic closer to users, Bakar said.

Assuming game developers commit to releasing top-tier games for the service, Google will likely have a first-mover advantage as video games inevitably shift away from expensive consoles and boxed discs to streaming services. However, both AWS and Microsoft Azure are capable of meeting the needs of a top-tier game-streaming service, and the cloud market leaders also maintain sprawling private fiber networks that handle customer traffic.

On the other hand, Google could actually have a first-mover disadvantage if substandard U.S. broadband networks cause a poor gaming experience; it’s hard for consumers to know whether to blame the content provider or their ISP in those situations. While Google’s private fiber network is easily one of the best in the world, it still has that “last-mile” problem of actually delivering those bits into your living room.

And AWS could be in a very interesting long-term position in game streaming thanks to Twitch, its corporate sibling. The popular service lets gamers stream their console or PC gameplay to audiences across the world, and organized competitions with massive streaming audiences are starting to become mainstream entertainment.
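To make the latency stakes concrete, streaming at 60 frames per second leaves only about 16.7 ms of rendering time per frame, and the full input-to-photon delay also has to absorb the network round trip plus video encode and decode. The sketch below illustrates that budget; the network and codec numbers are illustrative assumptions, not figures Google has published.

```python
# Back-of-the-envelope latency budget for 60 fps game streaming.
# The network/codec numbers are illustrative assumptions, not
# measured Stadia figures.

FRAME_RATE = 60
frame_time_ms = 1000 / FRAME_RATE  # ~16.7 ms of rendering budget per frame

# One round trip: controller input up, rendered/encoded video back down.
network_rtt_ms = 20   # player <-> nearby edge node (assumed)
encode_ms = 5         # server-side video encode (assumed)
decode_ms = 5         # client-side video decode (assumed)

input_to_photon_ms = network_rtt_ms + frame_time_ms + encode_ms + decode_ms

print(f"Frame budget: {frame_time_ms:.1f} ms")
print(f"Estimated input-to-photon latency: {input_to_photon_ms:.1f} ms")
```

Even with these optimistic assumptions the delay is several times a single frame, which is why edge nodes close to players matter so much for a service like this.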
Still, Google is very motivated to carve out its own sections of the cloud computing market as it looks up at AWS and Microsoft. The arrival of workable game streaming could generate tons of business from consumers and developers (pricing details were not released Tuesday), and would go a long way toward justifying the billions Google has poured into capital spending, much of which went toward servers and other cloud computing equipment.
Google isn’t launching a gaming console. The company is launching a game-streaming service instead, called Stadia. You’ll be able to run a game on a server and stream the video feed to your device. You won’t need to buy new hardware to access Stadia, but Stadia won’t be available on all devices from day one.

“With Google, your games will be immediately discoverable by 2 billion people on a Chrome browser, Chromebook, Chromecast, Pixel device. And we have plans to support more browsers and platforms over time,” Google CEO Sundar Pichai said shortly after opening the conference.

The Chrome browser will be the main interface for accessing the service on a laptop or desktop computer. The company says that you’ll be able to play with your existing controller, so if you have a PlayStation 4, Xbox One or Nintendo Switch controller, that should work just fine. Google is also making its own dedicated Stadia controller.

As expected, if you’re using a Chromecast with your TV, you’ll be able to turn it into a Stadia machine. Not every Chromecast model supports Bluetooth, so let’s see whether you’ll need a recent model to play with your existing controller. Google’s controller uses Wi-Fi, so that should theoretically work with older Chromecast models.

On mobile, it sounds like Google isn’t going to roll out its service to all Android devices from day one. Stadia could be limited to Pixel phones and tablets at first, but there’s no reason Google would not ship Stadia to all Android devices later. Interestingly, Google didn’t mention Apple devices at all. So if you have an iPhone or an iPad, don’t hold your breath. Apple doesn’t let third-party developers sell digital content in their apps without going through the App Store, which will create a challenge for Google.

Stadia isn’t available just yet; it’ll launch later this year. There are many outstanding questions after the conference. Google is entering a new industry, and it’s going to take some time to figure out the business model and the distribution model.
Onstage at GDC, CEO Sundar Pichai announced the company’s latest big initiative: taking on the entire gaming industry with a streaming service called Stadia. The service will let gamers leave their hefty GPUs and expensive systems behind. Pichai says the service can be used on any device with a Chrome browser and an internet connection; to Google, that means Stadia will launch on desktops, laptops, TVs, tablets and phones. The service will work across platforms, so you won’t just be competing with other Stadia users.

Google working on new gaming efforts isn’t exactly a surprise. Last fall, the company launched a pilot program of sorts with Project Stream, allowing gamers to stream gameplay of Assassin’s Creed Odyssey in their internet browser at 1080p and 60fps. At launch, Stadia will support 4K at 60fps with surround sound and HDR; Google says it is also working on 8K 120fps support in “the future.” The stat we’re waiting to hear about is latency, and what sort of ranges the service has been hitting ahead of launch.

The company showed off a dedicated Stadia controller, though you’ll also be able to use your existing third-party controllers or a keyboard and mouse. Speaking of hardware, Google has partnered with AMD to create a custom high-end GPU for Stadia that the company says pushes more than 10.7 teraflops, dwarfing the GPU output of both the Xbox One X and PS4 Pro.

When it comes to gaming, Google is an underdog here, though the company obviously has a massive mobile gaming platform with Android. When it comes to desktop gaming, the tech giant doesn’t have a ton of background aside from its sporadic efforts on PC virtual reality. One would imagine that Microsoft or Valve are the best positioned here, but Google has some pretty heavy mindshare with YouTube Gaming and some pretty heavy infrastructure with Google Cloud. Viewers will be able to move from YouTube directly into gameplay without any downloads just by clicking a “Play Now” button.
Google says this process can take as little as five seconds. Google certainly has ample reason to want gamers to move away from Windows PCs to systems with more lightweight onboard compute. The idea of running something heavier than minesweeper-equivalents on a Chromebook is pretty interesting; the idea of doing that across all of your devices could be game-changing.
Google unveiled a new game-streaming service called Stadia on Tuesday morning, looking to shake up the video game world by leveraging its experience in cloud technology, and posing a new threat to fellow tech giants that operate the dominant gaming platforms.

The search giant’s announcement promises to accelerate the industry’s evolution away from high-end hardware in the living room and toward streaming technology in the cloud. The move will be closely watched by existing game platform providers such as Nintendo, Microsoft, Valve, Sony and Apple, some of whom are working on their own streaming services. Google unveiled its plans this morning at GDC, the Game Developers Conference, in a keynote that started at 10 a.m.

The company also unveiled a Stadia controller with a dedicated button for sharing and saving gameplay on YouTube, and another button for getting help from Google Assistant, using a built-in microphone. The connection to Google’s dominant video platform illustrates the potential threat to Amazon’s Twitch.

Google gaming exec Phil Harrison shows the new Stadia controller. (Screenshot via YouTube.)

The search giant previewed its gaming ambitions with Project Stream, a test that allowed gamers to play a streamed version of Assassin’s Creed Odyssey in Chrome web browsers, with the game streamed from a Google data center rather than running on the user’s hardware.

“Internally, we were actually testing our ability to stream high fidelity graphics over a low latency network,” said Sundar Pichai, Google’s CEO, in his opening remarks at the event. “We learned that we could bring a Triple-A game to any device with a Chrome browser and an Internet connection, using the best of Google to create a powerful game platform.”

Sundar Pichai unveils Google’s plans. (Image via live stream.)
Phil Harrison, the former Microsoft Xbox and Sony PlayStation executive who now leads Google’s gaming initiatives, said the company was able to stream games at 1080p and 60 frames per second in that test. “We will be handing that extraordinary power of the data center to you, the game developers.”

Previewing the features of the Stadia service, Harrison showed the ability to jump directly into a game from a YouTube trailer, without any download required. “This new generation of gaming is not a box,” he said. “With Stadia, the data center is your platform. There is no console that limits the developer’s creative ideas, and no console that limits where gamers can play.”

As the operator of a large-scale cloud platform, Google is in a unique position to launch a streaming service. As one report put it, “It’s not a new technology, but past stabs at it have fizzled mostly because of latency issues, a problem that Google’s decision-makers think they can solve thanks to the data centers they’ve got all around the world.”
Angry Birds: Isle of Pigs took the scenic route to the iPhone. Rovio began flirting with augmented reality, releasing First Person Slingshot for the Magic Leap headset last year. Last month, Angry Birds VR: Isle of Pigs hit Steam for the HTC Vive and Oculus Rift. Now, it seems, the AR version of the title is finally ready for a mainstream audience, as Rovio preps the game for a spring release on iOS.

The game, developed by Swedish company Resolution Games, builds on the lessons of its predecessors, creating a customized version of the game for the mobile form factor. Sami Ronkainen, Rovio’s creative director of Extended Reality, told TechCrunch that the more mainstream version of the title borrows some levels from the earlier versions, while introducing a number of originals.

“What we did with the Magic Leap was we wanted to start with something that’s fully immersive that can make use of the 3D space around you,” he says. “We realized that it’s going to be further in the future, so we decided to go with platforms that are much closer to customers today.”

Of course, an imminent release for iOS marks far and away the largest potential audience the game has seen to date. But Rovio, notably, doesn’t stray too far from its roots here. While the title adopts a first-person view to make the most of the augmented reality experience, it’s the same game at its core as the one that debuted on the iPhone 10 years ago this December. You slingshot irritable avians into the weak points of compromised structures in order to take down enemy pigs. You know the deal. They made a whole feature-length film about it (with a sequel on the way this summer).

When you fire up the game, it will scan the environment for a suitable surface and start building structures on top of it once one is found. The environment and characters are brought to life in compelling ways, interacting with you as you move around. In fact, moving is a big part of the game.
This isn’t one you’re going to want to play on the subway — it requires getting creative about the angles you adopt to fell the structures.

Rovio doesn’t have a dedicated AR team, instead partnering with Swedish developer Resolution to offer a fuller experience from the ground up. “There are games where the AR seems like a bit of an add-on,” says Ronkainen. “We wanted to explore this from the angle of building a game that really makes use of the space. You can either build a team of your own and make your own mistakes, or you can partner with the best people who have a good track record of building AR games.”

I had a little hands-on time with the game earlier this month and was impressed with the little touches throughout, from the snowfall in certain levels to the pigs’ snorts as you come near. And certainly the new angle adds a different dimension to a game that’s frankly been run into the ground after years of sequels and spin-offs.

How long it will maintain that fresh outlook is another question entirely. So too is the question of how much users will want to engage by moving around over the long run. Ditto for the question of interacting with an AR game through a mobile device, rather than a headset. It’s clear that one of the reasons Rovio chose headsets first is that they’re simply a more natural method for interacting with a title like this.

What iOS does represent, however, is a way to bring the experience to the masses. For Apple, meanwhile, the casual game represents the potential to bring the ARKit experience to a much broader user base. Apple really needs one or two titles to showcase augmented reality on the mobile platform for mainstream users, and Angry Birds has both the name recognition and simple gameplay to do just that. The title is available for pre-order starting today.
It will arrive at some point in “late-spring.” It’s free to play, but will likely feature some manner of in-game purchasing — though the company tells me it hasn’t “locked down a business model” just yet.
A prolonged freeze on new game approvals in China hasn’t held back newcomers. Bytedance, the company behind a collection of rising new media apps including TikTok and Jinri Toutiao, is making a further push into video games after it took control of a mobile game developer through a roundabout deal.

According to a public filing, Shanghai Mokun has become wholly owned by Beijing Zhaoxi Guangnian, a second-tier subsidiary of Bytedance. Mokun is a mobile game developer previously owned by 37 Interactive Entertainment, a publicly listed games publisher whose revenue last year was roughly one-sixth that of Activision Blizzard. Zhang Lidong, a veteran journalist turned senior vice president at Bytedance, has taken the helm as Mokun’s legal representative. The price of the deal is undisclosed. A spokesperson from Bytedance declined to comment on the transaction. TechCrunch has reached out to 37 IE and will update the story if we hear back.

This isn’t the first time Bytedance has shown interest in the lucrative gaming market. Last month, TikTok’s Chinese version Douyin released its first in-app “mini-game,” and Toutiao had already rolled out mini-games on its personalized news distribution platform in September. These stripped-down apps-within-a-super-app have been a sought-after way for Chinese tech giants to lock users in rather than sending them off to download a stand-alone app.

Bytedance’s foray into mini-games looks like a move to take on Tencent’s WeChat messenger, which had amassed its own army of mini-games by January. On the other hand, Tencent is getting nervous about Bytedance’s rise after trying its hand at several TikTok-like apps. Though best known for WeChat, Tencent has been generating the bulk of its income from video games for years and is the world’s largest games publisher by revenue, according to market researchers.
Tencent’s trove of more than 1 billion MAUs on WeChat and about 800 million MAUs on QQ, its legacy messenger from the PC era, allows the giant to conveniently convert social media users into gamers. Users can, for instance, easily log in and invite friends to play games via their WeChat or QQ accounts.

By comparison, hundreds of millions of users stream short-form videos on Douyin each month. Many of them may have already seen in-stream ads for games on the video app, which has become a popular marketing channel for small game developers, according to several media-buying agencies TechCrunch previously spoke to. Worldwide, TikTok has also amassed an enormous user base. This considerable global reach, which Tencent lacks, may eventually give Bytedance an edge in games distribution if the company decides to launch the effort overseas.
Google is holding a press event at the Game Developers Conference today in San Francisco. The conference starts at 10 AM Pacific Time, 1 PM Eastern Time, 5 PM in London and 6 PM in Paris.

While many game companies rely on Google Cloud Platform for their server and infrastructure needs, today’s conference is going to be different. The company has been working on something called Project Stream for more than six months. In its initial test, the company let you play Assassin’s Creed Odyssey in your Chrome web browser. The game would run on a server in a data center near you, and you’d see the video stream in your browser and interact with your character from your computer.

And it sounds like Google is ready to launch its cloud gaming service for real. So let’s see how Google plans to sell this service and the initial game lineup.
Rizwan Virk, Contributor. Rizwan Virk is a serial entrepreneur and author.

Released this month 20 years ago, “The Matrix” went on to become a cultural phenomenon. This wasn’t just because of its ground-breaking special effects, but because it popularized an idea that has come to be known as the simulation hypothesis: the idea that the world we see around us may not be the “real world” at all, but a high-resolution simulation, much like a video game.

While the central question raised by “The Matrix” sounds like science fiction, it is now debated seriously by scientists, technologists and philosophers around the world. Elon Musk is among them; he thinks the odds are a billion to one in favor of us being inside a video-game world!

As a founder of and investor in many video game startups, I started to think about this question seriously after seeing how far virtual reality has come in creating immersive experiences. In this article we look at the development of video game technology, past and future, to ask: Could a simulation like that in “The Matrix” actually be built? And if so, what would it take? What we’re really asking is how far away we are from The Simulation Point, the theoretical point at which a technological civilization would be capable of building a simulation that was indistinguishable from “physical reality.”

[Editor’s note: This article summarizes one section of the author’s upcoming book.]

From science fiction to science?

But first, let’s back up. “The Matrix,” you’ll recall, starred Keanu Reeves as Neo, a hacker who encounters enigmatic references to something called the Matrix online. This leads him to the mysterious Morpheus (played by Laurence Fishburne, and aptly named after the Greek god of dreams) and his team.
When Neo asks Morpheus about the Matrix, Morpheus responds with what has become one of the most famous movie lines of all time: “Unfortunately, no one can be told what The Matrix is. You’ll have to see it for yourself.”

Even if you haven’t seen “The Matrix,” you’ve probably heard what happens next — in perhaps its most iconic scene, Morpheus gives Neo a choice: take the “red pill” to wake up and see what the Matrix really is, or take the “blue pill” and keep living his life. Neo takes the red pill and “wakes up” in the real world to find that what he thought was real was actually an intricately constructed computer simulation — basically an ultra-realistic video game! Neo and other humans are living in pods, jacked into the system via a cord into the cerebral cortex.

Who created the Matrix, and why are humans plugged into it at birth? In the two sequels, “The Matrix Reloaded” and “The Matrix Revolutions,” we find out that Earth has been taken over by a race of super-intelligent machines that need the electricity from human brains. The humans are kept occupied, docile and none the wiser thanks to their all-encompassing link to the Matrix!

But the Matrix wasn’t all philosophy and no action; there were plenty of eye-popping special effects during the fight scenes. Some of these now have their own names in the entertainment and video game industries, such as the famous “bullet time”: when a bullet is shot at Neo, the visuals slow down time and manipulate space; the camera moves in a circular motion while the bullet is frozen in the air. In the context of a 3D computer world, this makes perfect sense, though the camera technique is now used in both live action and video games.

AI plays a big role too: in the sequels, we find out much more about the agents pursuing Neo, Morpheus and the team.
Agent Smith (played brilliantly by Hugo Weaving), the main adversary in the first movie, is really a computer agent — an artificial intelligence meant to keep order in the simulation. Like any good AI villain, Agent Smith (who was voted the 84th most popular movie character of all time!) is able to reproduce himself and overlay himself onto any part of the simulation.

“The Matrix” storyboard from the original movie. (Photo by Jonathan Leibson/Getty Images for Warner Bros. Studio Tour Hollywood)

The Wachowskis, creators of “The Matrix,” claim to have been inspired by, among others, science fiction master Philip K. Dick. Most of us are familiar with Dick’s work from the many film and TV adaptations, ranging from Blade Runner and Total Recall to the more recent Amazon show The Man in the High Castle. Dick often explored questions of what was “real” versus “fake” in his vast body of work. These are some of the same themes we will have to grapple with to build a real Matrix: AI that is indistinguishable from humans, implanting false memories and broadcasting directly into the mind.

As part of writing my upcoming book, I interviewed Dick’s wife, Tessa B. Dick, and she told me that Philip K. Dick actually believed we were living in a simulation. He believed that someone was changing the parameters of the simulation, and that most of us were unaware this was going on. This was, of course, the theme of his short story “The Adjustment Team” (which served as the basis for the blockbuster “The Adjustment Bureau,” starring Matt Damon and Emily Blunt).

A quick summary of the basic (non-video game) simulation argument

Today, the simulation hypothesis has moved from science fiction to a subject of serious debate because of several key developments.
The first was when Oxford professor Nick Bostrom published his 2003 paper, “Are You Living in a Computer Simulation?” Bostrom doesn’t say much about video games or how we might build such a simulation; rather, he makes a clever statistical argument. Bostrom theorized that if a civilization ever got to the Simulation Point, it would create many ancestor simulations, each with large numbers (billions or trillions?) of simulated beings. Since simulated beings would vastly outnumber real beings, any given being (including us!) would be more likely to be living inside a simulation than outside of it! Other scientists, like astrophysicist and Cosmos host Neil deGrasse Tyson and physicist Stephen Hawking, weighed in, saying they found it hard to argue against this logic.

Bostrom’s argument implied two things that are the subject of intense debate. The first is that if any civilization ever reached the Simulation Point, then we are more likely to be in a simulation now. The second is that we are more likely all AI or simulated consciousnesses rather than biological ones. On this second point, I prefer to use the “video game” version of the simulation argument, which is a little different from Bostrom’s version.

Video games hold the key

Let’s look more closely at the video game version of the argument, which rests on the rapid pace of development of video game and computer graphics technology over the past decades. In video games, we have both “players,” who exist outside of the video game, and “characters,” who exist inside the game. In the game, we have PCs (player characters), which are controlled by (you might say mentally attached to) the players, and NPCs (non-player characters), which are the simulation’s artificial characters.
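Bostrom's counting argument reduces to one line of arithmetic: if you could be any observer with equal likelihood, the probability you are simulated is just the simulated beings' share of all beings. A toy illustration (the population counts here are arbitrary assumptions, not figures from Bostrom's paper):

```python
# Toy version of Bostrom's counting argument.
# Assume one "real" civilization of 10 billion beings runs many
# ancestor simulations, each also containing 10 billion beings.
# Both counts are arbitrary assumptions for illustration.

real_beings = 10_000_000_000
simulations = 1_000
simulated_beings = simulations * 10_000_000_000

# If you could be any observer with equal likelihood, the odds
# that you are one of the simulated ones:
p_simulated = simulated_beings / (simulated_beings + real_beings)

print(f"P(simulated) = {p_simulated:.4f}")
```

With even a thousand simulations, the probability of being one of the "real" observers falls below a tenth of a percent, which is the force of the statistical argument.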
Ray tracing has been a major topic of conversation at both GDC and GTC, so it seems fitting that the overlapping conventions would both kick off with an announcement that touches both industries. Today at GTC, Nvidia announced that it has built out a number of major partnerships with 3D software makers, including prominent names like Adobe and Autodesk, to integrate access to Nvidia’s RTX ray-tracing platform in their future software releases.

The partnership with Unity is perhaps the most interesting, given the excitement among game developers about bringing real-time ray tracing to interactive works. Epic Games had already announced Unreal Engine 4.22 support for Nvidia RTX ray tracing, and it was only a matter of time before Unity took the plunge as well; the tech is officially coming to Unity’s High Definition Render Pipeline (HDRP) today in preview.

The technology is all focused on rendering lighting in games more realistically, modeling how light interacts with the atmosphere and the objects it strikes. The technique has long been used in offline rendering, but it is resource-intensive, which is what makes the advancements of the past few years toward real-time ray tracing such an enticing prospect. Nvidia has certainly been tooting the horn of this technology, but there have been some doubts about whether this is just another technology that’s still a few years out from popular adoption among game developers.

“Real-time ray tracing moves real-time graphics significantly closer to realism, opening the gates to global rendering effects never before possible in the real-time domain,” a Unity exec said in a statement.
In their announcement, Nvidia boasted that their system enabled “ray traced images that can be indistinguishable from photographs” that “blur the line between real-time and reality.” While the prospect of added realism in gaming is certainly something consumers will be psyched about, engine-makers will undoubtedly also be promoting their early access to the Nvidia tech to customers in other industries including enterprise.
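At its core, the technique these announcements describe works by casting rays from a virtual camera into the scene and testing what each ray hits; lighting is then computed at the intersection point. The heart of that process is an intersection test like the one below, a standard textbook ray-sphere formulation (not Nvidia's RTX implementation, which runs hardware-accelerated versions of such tests billions of times per frame):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    in t. `direction` is assumed to be normalized.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # the quadratic's 'a' is 1 for a unit direction
    if disc < 0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# A ray shot down the z-axis at a unit sphere centered 5 units away
# hits the near surface at distance 4:
hit = ray_sphere_intersect((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0
```

The expense the article alludes to comes from repeating tests like this for millions of rays per frame, plus secondary rays for shadows and reflections, which is why dedicated hardware was needed to make it real-time.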