Issued April 30, 2019, to Microsoft Technology Licensing, LLC
Priority Date May 12, 2017
U.S. Patent No. 10,278,001 (the ‘001 Patent) relates to streaming video platforms, specifically those used to stream video games, such as Twitch, where users broadcast playthroughs and esports competitions. The ‘001 Patent details a system that can generate recorded video data along with corresponding audio; this data can then be streamed to and rendered on multiple computers. The system automatically generates customized scenes of gameplay by selecting a location and direction for a virtual camera perspective, and the camera perspective can also be used to “follow the action” of the gameplay. The invention enables spectators to observe recorded events or live events streamed in real time, and the technology can also be used to create instant replays of relevant gameplay, improving the viewing experience for streamed gameplay content.
The techniques disclosed herein provide a high-fidelity, rich, and engaging experience for spectators of streaming video services. The techniques disclosed herein enable a system to receive, process, and store session data defining activity of a virtual reality environment. The system can generate recorded video data of the session activity along with rendered spatial audio data, e.g., render the spatial audio in the cloud, for streaming of the video data and rendered spatial audio data to one or more computers. The video data and rendered spatial audio data can provide high-fidelity video clips of salient activity of a virtual reality environment. In one illustrative example, the system can automatically create a video from one or more camera positions and audio data that corresponds to the camera positions.
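The pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not the patented implementation: the event structure, the activity threshold, and the distance-based attenuation stand in for the patent's session data, salient-activity analysis, and full spatial-audio rendering.

```python
import math
from dataclasses import dataclass

# Hypothetical session event: a timestamped position with an activity score.
@dataclass
class SessionEvent:
    time: float
    position: tuple   # (x, y, z) in the virtual environment
    activity: float   # e.g., interactions per second (illustrative metric)

def select_camera_positions(events, threshold=0.5):
    """Keep only salient moments: events whose activity score exceeds
    the threshold become virtual-camera keyframes for the recording."""
    return [e for e in events if e.activity > threshold]

def attenuate(volume, camera_pos, source_pos):
    """Distance-based attenuation as a stand-in for spatial-audio
    rendering: the source is louder when the camera is closer."""
    return volume / (1.0 + math.dist(camera_pos, source_pos))

events = [
    SessionEvent(0.0, (0, 0, 0), 0.1),
    SessionEvent(1.0, (5, 0, 0), 0.9),  # salient activity
    SessionEvent(2.0, (6, 1, 0), 0.7),  # salient activity
]
keyframes = select_camera_positions(events)
print(len(keyframes))                                        # → 2
print(attenuate(1.0, keyframes[0].position, (5, 0, 3)))      # → 0.25
```

A cloud renderer would replace `attenuate` with a real spatializer, but the flow is the same: session data in, salient camera keyframes out, audio mixed relative to each camera position.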
1. A computing device, comprising:
a processor;
a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the computing device to:
receive session data defining a virtual reality environment comprising a participant object, the session data allowing a participant to provide a participant input for controlling a location of the participant object and a direction of the participant object;
analyze the session data to determine a level of activity associated with the participant object;
determine a level of activity associated with one or more virtual objects;
select a location and a direction of a virtual camera perspective, wherein the virtual camera perspective is a first-person perspective projecting from the location of the participant object, wherein the direction is towards the one or more virtual objects when the level of activity of the one or more virtual objects is greater than the activity level of the participant object; and
generate an output file comprising video data having images from the location and direction of the virtual camera perspective, wherein the output file further comprises audio data for causing an output device to emanate an audio output from a speaker object location that is selected based on the location of the virtual camera perspective, wherein the audio output emanating from the speaker object location models the location and direction of the virtual camera perspective.
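The claim's camera-selection rule can be sketched in Python. This is a hedged illustration under simplifying assumptions (2-D geometry, summed activity scores, a co-located speaker object); none of the function names or data shapes come from the patent itself.

```python
import math

def camera_direction(participant_pos, participant_dir,
                     participant_activity, objects):
    """Select the virtual camera direction per the claim logic:
    a first-person perspective projecting from the participant's
    location, turned toward the virtual objects whenever their
    activity level exceeds the participant's own.
    `objects` is a list of ((x, y), activity) pairs."""
    object_activity = sum(a for _, a in objects)
    if objects and object_activity > participant_activity:
        # Aim at the most active virtual object (unit direction vector).
        target, _ = max(objects, key=lambda o: o[1])
        dx = target[0] - participant_pos[0]
        dy = target[1] - participant_pos[1]
        norm = math.hypot(dx, dy) or 1.0
        return (dx / norm, dy / norm)
    return participant_dir  # keep the participant's own view direction

def speaker_location(camera_pos):
    """The claim selects the speaker object location based on the
    camera location; the simplest model co-locates the two."""
    return camera_pos

pos, facing = (0.0, 0.0), (1.0, 0.0)
objects = [((0.0, 4.0), 0.8), ((3.0, 0.0), 0.2)]
print(camera_direction(pos, facing, 0.5, objects))  # → (0.0, 1.0), toward the action
print(speaker_location(pos))                        # → (0.0, 0.0)
```

When the participant's own activity dominates, the camera simply keeps the participant's view direction, matching the claim's conditional "wherein the direction is towards the one or more virtual objects when" their activity level is greater.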