Advanced technologies in the Creative Industries: Case Studies


Dan Turner, CRAIC, Loughborough University London


This essay presents an overview of the research that informed the selection of three case study subjects as part of CRAIC’s ongoing research with XR Stories on the use of advanced technologies within the Creative Industries (CIs). The study explores how Immersive (XR) technologies, Artificial Intelligence (AI), Cloud Technology, and 5G, amongst others, are currently being deployed and utilised within the sector.

Background

As outlined in our previous essay, there have been national and international reports identifying key technologies, such as those mentioned above, that have the potential to affect the creative and cultural industries, with some also having implications for wider society. In order to better understand and exploit the opportunities these technologies present, increasing amounts of public and private funding have been put towards research and development (R&D) within the creative industries. Given that the creative industries are one of the UK’s largest exports, it should be no surprise that in 2020 the UK was third in the world for CreaTech investment, behind only the USA and China (Turner, 2022). Large publicly funded initiatives have included the Creative Industries Clusters Programme (CICP) and the Audience of the Future Challenge (AoTF), both part of UKRI’s wider Industrial Strategy Challenge Fund, and CreativeXR, a 12-week accelerator programme funded by Arts Council England and Digital Catapult. The creative industries have also been quick to respond to new technological advancements, with projects such as 5G Edge-XR, launched in 2020 as a collaboration between government, private, and academic institutions to explore how 5G networks combined with cloud-based GPUs can deliver new immersive experiences to users via smartphones, tablets, and TVs. Our project takes a broad view of the creative industries to ascertain the range of technologies being used and how they are being adapted and deployed by the various organisations working within the space.

Some of these technologies have started to become established in the public mainstream, being adopted by a range of creative and technology-focused organisations. The British mobile network operator EE became the first in the UK to launch 5G in May 2019, and in January 2020 launched an advertising campaign featuring what is claimed to be the world’s first AR multi-location concert delivered over 5G. The performance took place at Birmingham New Street train station and was streamed to London Kings Cross, Liverpool, and Edinburgh. As well as streaming the concert, AR technology was used to overlay digital assets and effects that could be viewed through the audience’s mobile devices at all locations. A similar campaign was launched in November 2020, this time using 5G to stream a live AR experience featuring Rita Ora. Ora was transformed into a digital avatar using photogrammetry, which was then animated in real time using live data captured and processed by the Xsens MVN Animate inertial motion capture system, alongside real-time facial and hand tracking. The experience was streamed live, along with other AR assets, to give the impression of a giant Ora performing amongst London’s cityscape. These examples of technology being used for public broadcast are particularly interesting as both were one-off live performances that used a variety of advanced performance technologies, yet did so in the service of a marketing campaign demonstrating the capability of a deployment technology.
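To make the real-time element of this kind of pipeline concrete, the sketch below shows, in outline, how a stream of motion capture pose packets might be applied to an avatar rig each frame. It is a minimal illustration only: the packet layout, joint list, and port number are invented for this example and do not describe Xsens MVN’s actual streaming protocol.

```python
import socket
import struct

# A minimal sketch of a real-time retargeting loop: pose packets arrive over
# the network from a motion capture system and are applied to an avatar's
# joints every frame. The packet layout, joint list, and port are invented
# for illustration; Xsens MVN's actual streaming protocol differs.

JOINTS = ["pelvis", "spine", "head", "l_arm", "r_arm"]  # illustrative rig
PACKET = struct.Struct("<" + "ffff" * len(JOINTS))      # one quaternion per joint

def apply_pose(rig: dict, quaternions: list) -> None:
    """Write the latest captured orientations onto the avatar rig."""
    for joint, quat in zip(JOINTS, quaternions):
        rig[joint] = quat  # a real engine would retarget/blend here

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9763))  # illustrative port number
rig: dict = {}
while True:  # in practice this would run inside the engine's update loop
    data, _ = sock.recvfrom(PACKET.size)
    values = PACKET.unpack(data)
    apply_pose(rig, [values[i:i + 4] for i in range(0, len(values), 4)])
```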

ABBA Voyage is a very recent example of a highly successful production utilising a range of different capture and deployment technologies. The production is a virtual concert residency taking place in a custom-built, 3,000-capacity arena at Queen Elizabeth Olympic Park, London. Rather than featuring the band themselves, the concert features avatars (or ‘ABBAtars’, as they are referred to within the context of the production) of the group as they appeared in 1979. These were created through a combination of motion capture, advanced facial capture (most likely using one of Industrial Light and Magic’s (ILM) proprietary systems) and, though not specifically stated, some form of volumetric capture or photogrammetry similar to the techniques used to create EE’s Rita Ora avatar. The resulting models were then de-aged by artists and engineers from ILM. Though motion capture was used to digitise the dance moves of the ABBA performers themselves, the movements recorded and used to animate the avatars in the concert were recreated and captured using younger body doubles. It is also of note that, although there is a live band on stage providing the music, everything seen on the large screens (including images of the band) is pre-recorded, so the only live aspect is, in fact, the music. The vocals ‘sung’ by the avatars are also the original recorded vocals from the 1970s. It is worth noting, too, that this kind of technology and production is not readily available to anyone who may wish to use it in this fashion. The technical aspects of the visual production were handled by ILM, who traditionally provide state-of-the-art visual effects and virtual production services to Hollywood productions; recent examples include The Mandalorian, Thor: Love and Thunder, and Avatar: The Way of Water. So while this virtual concert is a unique application, the technology and costs involved are akin to a major film production: the production costs have been quoted as £140m.

The creative industries as a commercial sector (not including technology-focused research institutions) is often a user, rather than a producer, of new technology, and by making ever-increasing demands of this technology, pushing it to the limits of what might be possible, the creative industries can often drive innovation from technology producers (Muller, Rammer, and Truby, 2009). It can also be in consultation with content creators that new strands of low-level technology research are developed to solve existing challenges within their practice. For example, Turner et al. (2022)* conducted interviews with a group of professional sound designers working across the immersive experience space to gain an understanding of how they approach immersive sound design using spatial audio. Analysis of the interviews highlighted several potential research topics, including the development of audio processing algorithms to deal with a lack of spatial sound libraries. However, a large part of R&D within the CIs is focused on innovative use and adaptation of existing technologies, such as utilising motion capture systems to drive virtual avatars in real time, as was the case in the EE advertising campaigns. For this reason it is important to know what innovations are taking place, especially for smaller companies that rely on public funding initiatives in order to fund periods of R&D.

Exploring potential cases

Choosing a case for study involves sampling a very small number of candidates from an often extremely large potential sample pool, and attention must be given to the method of sampling used to select the cases (Seawright and Gerring, 2008). This holds even if the aim is not to generalise the results of the study to the rest of the population. The selection and reviewing of the potential cases presented in this work were informed by extensive reading, engagement with, and consideration of a wide body of creative technology use cases. Interviews were also conducted with experts at UK Research and Innovation (UKRI) and Creative Cluster directors, alongside engagement with other key stakeholders, industry professionals, and researchers. Over the course of the longlisting period a number of conferences, events and, where possible, performances were attended in order to experience creative technology projects first hand. This provided a broad range of perspectives to draw upon and an extensive body of knowledge about both commercial and R&D activities currently being undertaken. Some of these cases are now explored, providing a representative sample of the types of creative technology innovation being undertaken within public and private organisations.

In total, 25 experiences were included in the longlist and reviewed in the manner illustrated by the examples below. Many more experiences exist within the scope of this research, but it was decided to keep the longlist to a manageable size in order to allow adequate time to assess each entry during the shortlisting process, given the six-month timescale of this project. The longlist reviews were then combined with the criteria below to inform the shortlisting process, through which three cases were selected for in-depth analysis.

  1. Advanced technologies utilised
  2. Initial impression of novelty with respect to use of technology
  3. Number of different advanced technologies utilised
  4. Type of experience/output, e.g. interactive theatre, location-based AR, new infrastructure
  5. Current stage of production, e.g. in production, finished, live
  6. Organisation/s involved
  7. Funding partners
  8. Whether we have existing contacts within the organisation

The three projects chosen were Flood by Megaverse, Interchange by Prox & Reverie, and Weavr, which was delivered by a consortium of partners led by ESL.

The rest of this essay explores some of the more interesting projects that, for reasons that will be explained, were not chosen as one of the three in-depth case studies.

StoryTrails

Overview

StoryTrails is a location-based immersive experience delivered across 15 locations in the UK. The project is led by StoryFutures Academy, funded as part of UKRI’s Industrial Strategy Challenge Fund, in partnership with 9 other organisations from both the public and private sectors. The content of the experience is specific to each of the 15 sites and consists of three activities: an AR trail, a VR experience, and 3D spatial map installations that users can navigate and explore, such as the one shown below. In this sense StoryTrails is interesting as it can be seen as 15 separate projects connected by a common theme, utilising a central technology framework, and deployed through a single mobile app. There is also a training element to the project, as 50 creative media practitioners were recruited and trained in the use of immersive technologies such as AR and VR toolkits, spatial audio production, and 3D model capture.

Use of Technology

The AR trails direct the user, at their own pace, to different points along a predefined route, with AR assets drawing on the BFI and BBC archives to tell the history of the specific location at which the user is placed. As part of the AR experience there is also a virtual cinema, which places an AR structure, such as a historic representation of the user’s location, into the scene and projects a short film onto it, accompanied by facts about the local area and community on posters; an example of this is shown below. At the local public library for each chosen site (where the trails often begin) there are spatial maps of the town or city that include digital assets captured using 3D modelling and spatial audio from different places across the location. The same framework is used at all sites, but each experience requires the user to download the content for that specific site. Using location data provided by the user’s device, the app can then overlay AR assets at the correct locations. This uses the same kind of technology as Pokémon Go by Niantic, which is also one of the partners on the StoryTrails project, and allows digital assets to be anchored at specific locations within the real world, though users must grant the app access to their location data. The content for each experience is downloaded independently, so it appears the experience does not rely on real-time content streaming via mobile data networks. If this is the case, it reduces the reliance on the strength and available bandwidth of the user’s mobile data connection.
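The anchoring logic described above can be sketched very simply. The snippet below is a minimal illustration, assuming a per-location manifest of assets anchored to latitude/longitude pairs and a fixed activation radius; the coordinates, radius, file names, and manifest structure are all invented for this example and do not reflect the StoryTrails app’s actual implementation.

```python
import math

# Hypothetical per-location content manifest: each AR asset is anchored to a
# latitude/longitude pair, mirroring the geo-anchoring approach described
# above. Coordinates and asset names are purely illustrative.
TRAIL_ANCHORS = [
    {"asset": "virtual_cinema.glb", "lat": 53.4084, "lon": -2.9916},
    {"asset": "archive_poster.glb", "lat": 53.4071, "lon": -2.9938},
]

TRIGGER_RADIUS_M = 25  # assumed activation radius around each anchor

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def active_assets(user_lat, user_lon):
    """Return the pre-downloaded assets whose anchors the user is near enough to trigger."""
    return [
        a["asset"]
        for a in TRAIL_ANCHORS
        if haversine_m(user_lat, user_lon, a["lat"], a["lon"]) <= TRIGGER_RADIUS_M
    ]

print(active_assets(53.4083, -2.9917))  # near the first anchor -> ['virtual_cinema.glb']
```

Because the content bundle is downloaded in advance, only the lightweight proximity check needs to run in real time, which is consistent with the app not relying on mobile data strength once a trail has begun.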

Alongside the AR trails, which can be accessed by users through the app at any time, there are 7 VR experiences that toured the 15 locations between 2nd July and 19th September 2022. These were based at public libraries through a partnership with The Reading Agency and were delivered through Meta Quest 2 headsets. There were also virtual library installations of “spatial maps” that users can navigate through, consisting of 3D-modelled objects from across a location combined with audio stories. It is also worth noting that while the experiences are designed to be accessed at each location, there is an option to view the AR content remotely, making the experiences accessible to those not able to travel to one of the StoryTrails events. When experiencing the content remotely, one does need to ensure there is an adequately sized flat surface at an appropriate distance from the phone for the AR objects to populate; otherwise scale and the viewing of some of the virtual screens can be problematic, as while it is possible to change the scale of the AR scene, it is not possible to reorientate it. An example of this is shown below, where the user is unable to see the virtual screen due to its position, and is unable to reposition themselves accordingly.

Image: AR virtual cinema experience viewed remotely.
Image: virtual screen obscured due to the orientation of the AR scene.

Limitations

The project mainly spans the CI subsectors of Film, TV, Video, Radio & Photography and Music, Performing and Visual Arts, as defined in the Creative Industries Economic Estimates Methodology, though given the creation of the app and digital assets it also involves activities that can be classified under IT, Software and Computer Games. StoryTrails was an interesting project to consider as a case study as it utilises a range of digital technologies and undertook activities spanning multiple CI subsectors. It also features a range of different partner organisations, as well as having a skills and training component. The capture technology used, such as LiDAR scanning and spatial audio, is used to great effect when combined with AR delivery, and the range of activities at each location gives users multiple ways to consume the content. It was also impressive logistically, since it involved significant coordination to design and deliver 15 unique experiences that had a cohesive feel to them as part of a singular, nationwide experience. Though a great example of location-based immersive storytelling, it was not selected as a case study because there are well-established pipelines for the combination of technologies involved, e.g. LiDAR scanning of environments and objects, turning these into AR assets to be placed back into the world, and capturing spatial audio that users can experience over headphones; a schematic of this pipeline is sketched below. While StoryTrails is driving innovation, it is not technology innovation. Markova (2022) positions StoryTrails in terms of “inclusive innovation”: by selecting locations with high deprivation indices and low indices of cultural participation, the project brings the latest in storytelling technologies to more underprivileged communities, as well as providing upskilling for local creatives. So while this may not be technological innovation, which is the core interest of our research, it provides a central technology framework as an intervention that serves to amplify the stories of people, places, and communities. We would also argue that StoryTrails can be understood through the lens of cultural innovation, specifically innovation in value creation (Bakhshi and Throsby, 2011), by providing new opportunities and methods to utilise cultural assets (such as historic archives) to create value for the communities whose stories are told and for the audiences that experience them.
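As a schematic of how routine this capture-to-AR pipeline has become, the stub below simply names its typical stages. Every function is a hypothetical placeholder standing in for capability that existing tools (LiDAR scanning apps, photogrammetry packages, game engines, ambisonic recorders) already provide; none of the names correspond to a real API.

```python
# Schematic sketch of the established capture-to-AR pipeline referred to
# above. All functions are hypothetical placeholders, not real APIs.

def scan_environment(site: str):
    """Capture a LiDAR point cloud of a location (placeholder)."""
    ...

def reconstruct_mesh(cloud):
    """Turn the point cloud into a textured mesh (placeholder)."""
    ...

def export_ar_asset(mesh, path: str) -> None:
    """Optimise and export the mesh in an AR-ready format such as glTF (placeholder)."""
    ...

def record_spatial_audio(site: str):
    """Capture an ambisonic recording for binaural playback over headphones (placeholder)."""
    ...

def build_experience(site: str) -> None:
    # The pipeline is linear and well understood, which is precisely why the
    # project's innovation lies elsewhere: capture, reconstruct, export, plus
    # a parallel spatial audio capture.
    cloud = scan_environment(site)
    mesh = reconstruct_mesh(cloud)
    export_ar_asset(mesh, f"{site}/asset.gltf")
    record_spatial_audio(site)
```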

Nomad Atelier

Overview

Nomad Atelier is a luxury fashion brand that, in collaboration with Future Fashion Factory and the University of Huddersfield, explored the use of 3D and digital technologies for showcasing products digitally, in order to contribute to the digital growth of the brand and its use of digital tools. The project was three months long, so it can be seen as a short, intensive piece of R&D that culminated in some interesting outputs. It was also one of only two projects reviewed that sat within the Design and Designer Fashion subsector, the other also being associated with Future Fashion Factory. The R&D for this project was undertaken by academic researchers, but the direction of that research was decided in consultation with the owner of Nomad. This highlights the fact that not all businesses will have the skills or knowledge internally to explore or take advantage of available technological innovations. Although this can apply to many areas of the CIs (traditional screen industries, performing arts, music), as they all have very established workflows and tools, it may be particularly the case within industries such as fashion or crafts, which use a very different range of technologies from the screen industries, whose workflows, as we have discussed, may be more easily adapted to utilise new real-time technologies. This creates an additional barrier, as such an organisation may also not have the financial resources to bring in external practitioners or to experiment themselves on projects that will not result in a direct, and immediate, commercial output. Funding that allows the time to experiment without the requirement of a commercial output was cited frequently during XR Stories’ OpenXR video series as a huge benefit to organisations wanting to experiment with new technologies.

Use of Technology

The practical R&D, after a phase of competitor analysis, benchmarking and consultation, tested a range of 3D design applications for the purpose of showcasing garments to customers. The aim was to demonstrate that 3D modelling technologies could communicate the fit and drape of these garments, both stationary and in dynamic movement. If successful, this would provide several opportunities for the business and the customer. From a business perspective it may increase effective market access, as customers who are interested in the brand but do not live close to the store would then be able, to an extent, to see how the clothes look and fit in a much more ecologically valid way than simply looking at a static photograph. From a sustainability and cost perspective, it helps reduce potential material waste and financial overheads by reducing the need for physical samples to be manufactured, along with the associated cost of having them modelled. As Parkin (2021) states, it could allow Nomad to gauge interest in potential items before investing to have them made.
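To give a flavour of the kind of simulation that underlies drape prediction, the sketch below implements a toy mass-spring cloth that settles under gravity. It is purely illustrative: commercial tools such as Clo 3D use far more sophisticated solvers driven by measured fabric parameters, and all constants here are invented.

```python
import numpy as np

# A toy mass-spring cloth: an N x N grid of particles joined by structural
# springs, with the top row pinned as if hung on a rail, settling under
# gravity via semi-implicit Euler integration. All constants are invented.

N = 12            # grid resolution
REST = 0.05       # rest length between neighbouring particles (metres)
K = 400.0         # spring stiffness (illustrative)
MASS = 0.01       # particle mass (kg)
DT = 0.002        # time step (seconds)
DAMPING = 0.02    # per-step velocity damping
GRAVITY = np.array([0.0, -9.81, 0.0])

# Particles laid out in a vertical plane; the top row (j = 0) is pinned.
pos = np.array([[i * REST, -j * REST, 0.0] for j in range(N) for i in range(N)])
vel = np.zeros_like(pos)
pinned = np.arange(N)

# Structural springs between horizontal and vertical neighbours.
springs = [(j * N + i, j * N + i + 1) for j in range(N) for i in range(N - 1)]
springs += [(j * N + i, (j + 1) * N + i) for j in range(N - 1) for i in range(N)]

def step() -> None:
    force = np.tile(GRAVITY * MASS, (N * N, 1))
    for a, b in springs:
        d = pos[b] - pos[a]
        length = np.linalg.norm(d)
        f = K * (length - REST) * d / length  # Hooke's law along the spring
        force[a] += f
        force[b] -= f
    vel[:] = (vel + DT * force / MASS) * (1.0 - DAMPING)
    pos[:] += DT * vel  # position uses the updated velocity (semi-implicit)
    pos[pinned] = [[i * REST, 0.0, 0.0] for i in range(N)]  # re-pin top row
    vel[pinned] = 0.0

for _ in range(2000):
    step()
print(f"lowest point after settling: {pos[:, 1].min():.3f} m")
```

Real garment simulation adds bending and shear springs, collision with a body model, and fabric parameters measured from physical swatches, which is what allows tools in this space to communicate fit as well as drape.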

Limitations

The team used Clo 3D, an established application for designing 3D garments used by companies such as Adidas, Arc’teryx, New Balance, and Patagonia, to generate models of existing Nomad garments. Interestingly, the creators of Clo first found success when their open-source software gained popularity with cosplay artists. So, while this project is interesting within the context of small business innovation, and the importance of access to digital technologies and the skills to utilise and explore them, its use of established platforms and processes meant it sits outside the scope of this research. Given its use of a single, self-contained technology platform, we also feel it may not have provided as much insight into technology and creative practice innovation as projects utilising a range of different technologies.

The Under Presents

Overview and Use of Technology

The Under Presents is an immersive theatre VR experience by the LA-based art and games studio Tender Claws, in collaboration with the Piehole theatre company. Tender Claws describes the experience as a “full single player timeloop based experience with multiplayer components as a layer on top of it.” Much like the type of immersive theatre pioneered by Punchdrunk, The Under Presents allows the audience to move freely within the world, exploring whichever part, or strand, of the narrative they choose. The audience’s ability to interact with the narrative is limited, as they are unable to speak, but the actors are able to interact with the audience: transporting them to different places, placing them in interactive situations such as on-stage performances, and even placing them in a cage if they “misbehave”. The tools available to the actors allow them to interact with audience members and the environment to guide the audience’s experience, which perhaps makes up for the limited abilities given to the audience. While utilising VR offers advantages in providing experiences that would not be possible within a real-life theatre production, the piece stays very true to the immersive theatre genre with respect to audience interaction: the audience are still mostly spectators, a group of wandering audience members as Stein (2019) puts it.

The experience was available on Facebook’s Oculus Quest and had limited runs, much as any real-world theatre show would. Once the live runs were completed, the experience remained available as a multiplayer experience, but with all the action pre-recorded. This is certainly novel, and it gives people the opportunity to experience the production, albeit an arguably diminished version of itself. The production certainly succeeds in blending traditional immersive theatre with a VR experience, and introduces several novel features. The actors, when entering the experience, have a virtual backstage dressing room where they can choose their character’s appearance and abilities, such as instant transportation (of themselves and audience members) or the spawning of items. Scenes and situations can also be staged that would not be safe or possible to portray using traditional theatre. The production’s availability as a standalone experience after the live shows finished also creates replayability and increased market access: audience members are able to play through the experience multiple times, each time choosing different paths and accessing different aspects of the story. As it also doesn’t limit the experience to those who were able to attend the live shows, there is the possibility of generating a certain amount of passive income for the companies involved. Some of The Under Presents’ strengths are also causes of some of its limitations. The fact it runs on a consumer headset provides easy access to a wide audience that isn’t limited by location, but it may be limited by time zone, as the performances took place in the afternoon and evening Pacific Standard Time (PST). Being on a consumer headset also limits the graphical processing capabilities available, given that live performance data has to be streamed to, and rendered on, the user’s device.
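The pre-recorded multiplayer mode suggests a record-and-replay architecture: actor actions captured as timestamped events during the live runs can later be replayed on a schedule. The sketch below illustrates the idea; it is a guess at the general approach, not Tender Claws’ actual implementation, and all event and file names are invented.

```python
import json
import time

# A minimal record-and-replay sketch: during a live show, actor actions are
# captured as timestamped events; afterwards the same event log can be
# replayed so later visitors see the performance without live actors.
# Illustrative only; not Tender Claws' actual system.

class PerformanceRecorder:
    def __init__(self):
        self.start = time.monotonic()
        self.events = []

    def record(self, actor, action, **params):
        self.events.append({
            "t": time.monotonic() - self.start,  # seconds since show start
            "actor": actor,
            "action": action,
            "params": params,
        })

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f)

def replay(path, apply_event):
    """Re-run a recorded show, invoking apply_event at each event's offset."""
    with open(path) as f:
        events = json.load(f)
    start = time.monotonic()
    for e in sorted(events, key=lambda e: e["t"]):
        delay = e["t"] - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)  # wait until this event's offset in the show
        apply_event(e)

# Usage: record a couple of illustrative actor actions, then replay them.
rec = PerformanceRecorder()
rec.record("actor_1", "teleport_audience", destination="stage")
time.sleep(0.1)
rec.record("actor_1", "spawn_item", item="lantern")
rec.save("show.json")
replay("show.json", lambda e: print(f'{e["t"]:.2f}s {e["actor"]} -> {e["action"]}'))
```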

 

Limitations

The Under Presents is an interesting exploration of how VR can help produce the next generation of immersive theatre, delivered using consumer-level technology. The industry, however, is moving quickly, and other projects reviewed take this concept further, undertaking more extensive R&D to see what is possible using professional-grade XR technologies to create high-quality, interactive, location-based experiences. The technology platform and pipelines involved are also well established, as multiplayer VR games are relatively commonplace. So while it is an interesting project that succeeds in pushing the boundaries of immersive theatre, and makes the experience available to a wide audience, within the context of technology innovation there are other projects producing work more in line with the research questions we were seeking to answer.

Reflections

One thing that became apparent during this process was that the use of many of the technologies of interest is focused mainly within two CI subsectors: Film, TV, Video, Radio & Photography and Music, Performing and Visual Arts. We see at least three factors that may contribute to this.

Firstly, these technologies are currently more easily and obviously applied to those areas of the CIs. For example, real-time game engine technologies are deployed similarly in their traditional application within the video games industry (e.g. world and asset building) and in their application in the screen industries (e.g. virtual production and VR experiences), and perhaps more readily than they might be to, say, the advertising industry.

Secondly, there is potentially a lack of access to the finances required to undertake exploratory projects using new technologies that may not result in a commercially viable output. This is most likely to be the case for small to medium companies, which will not have the financial resources to undertake self-funded programmes of R&D. Data collected by UKRI on their Creative Industries portfolio showed that between 2016 and 2021 there were 885 projects with identifiable application to the CIs, with a total qualified investment of £248,585,415. The Film, TV, Video, Radio & Photography and IT, Software and Computer Services sectors accounted for 23.7% and 28.5% of the total investment respectively, as the short calculation below illustrates in absolute terms. While it is beyond the scope of this project to ascertain the reasons behind the distribution of funding, it does support our observation that technology R&D within the CIs is, at present, focused mostly within a few subsectors. It is also worth noting that activities concerning the IT, Software and Computer Services subsector, as mentioned previously in the context of StoryTrails, will be commonplace throughout technology-related R&D projects.

Thirdly, there are potential inherent biases in case study selection: our professional closeness to some CI subsectors means that it is easier to discover examples of projects that fit within the scope of this research than it is for those subsectors of which we have less knowledge.
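For context, those percentages translate into absolute amounts as follows (a simple calculation over the UKRI figures quoted above):

```python
# Convert the UKRI portfolio percentages quoted above into absolute amounts.
total_investment = 248_585_415  # total qualified investment, 2016-2021 (GBP)

shares = {
    "Film, TV, Video, Radio & Photography": 0.237,
    "IT, Software and Computer Services": 0.285,
}

for sector, share in shares.items():
    print(f"{sector}: £{total_investment * share:,.0f}")
# -> roughly £58.9m and £70.8m respectively
```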

The next essay will provide an in-depth exploration of our three case studies in relation to our research questions and where they sit within the wider context of creative technology innovation.

 

*Turner, D., Pike, C., Baume, C. and Murphy, D. (in press, 2022), ‘Spatial audio production for immersive media experiences: Perspectives on practice-led approaches to designing immersive audio content’, The Soundtrack, 13:1, pp. 73–94.