NAB 2018: Artificial Intelligence in the spotlight

Discussion in 'ENGLISH' started by Jose Antunes, 24/3/18.

Views: 452


    From production to distribution, Artificial Intelligence is entering the filmmaking industry. The 2018 edition of NAB will dedicate time and space to showcasing some of the developments that can benefit professionals.

    This may be the year of Artificial Intelligence. At the 2018 NAB Show, under the umbrella of next-gen media technologies, AI will be one of the themes, along with immersive media and cybersecurity. The organizers of the NAB Show invite visitors to “get an insider’s look into disruptive tech” and suggest that the conference programs put a spotlight on “cutting edge technology that is reshaping the creation, distribution and consumption of entertainment content. Join thought leaders and catalysts from the entertainment, consumer electronics, technology, and service industries for an insider’s look into the emerging technologies disrupting everything from the creative process to business models and consumer behavior.”

    The titles of the conferences on AI/Machine Intelligence, scheduled for the whole morning of April 9, suggest what the future will bring. “Machine Intelligence: The Evolution of Content Production Aided by Machine Learning” is the first, and the following sessions have titles like “Optimizing Production with Neural Networks”, “How Machine Intelligence is Transforming Editorial”, “New Frontiers in Animation and Computer Graphics”, “From Dailies to Master – Machine Intelligence Comes to Video Workflows” and, finally, “The Future of Content with Machine Intelligence”.

    This half-day series looks at machine learning, deep learning and artificial intelligence technologies, and at how studios, networks and creative service companies are using them to help produce content. According to the organizers, participants will hear how they can boost productivity, efficiency and creativity in production planning, animation, visual effects, editorial, post and localization. They will also see what AI and ML apps are capable of doing right now and glimpse their long-term potential to alter jobs and workflows. Attendees will hear top technologists, production executives and filmmakers share the latest research and case studies, and learn how to prepare for a future with machine intelligence as a team member.

    Can androids make movies?

    The presence of Artificial Intelligence at NAB 2018, though, will be felt throughout the event. It starts April 8, with the conference “Do Androids Dream of Making Movies”, from the series dedicated to the Future of Cinema. The session reflects on the fact that the M&E industry has been buzzing with the use of Artificial Intelligence and Machine Learning, and asks if movies can “be created and improved through AI? Could robots replace humans for storytelling? Is Sci-Fi becoming Science Non-Fiction? This session will look at current examples of using AI and ML in content creation and where these technologies could take us in the future of making movies and cinema.”

    Elon Musk may want to build the first cities on Mars very soon, but according to the conference “Smart, Safe, FUN Cities: AI, AR, ATSC 3.0 & Urban Opportunities”, which happens April 10, we should “forget outer space, because the city of the future is here on Earth”. The title and opening may sound unrelated to cinematography, but the conference’s theme is all about the production and distribution of moving images. The presentation text says this: “Historic towns are revitalizing, threading digital infrastructure throughout landmark districts. Others are springing up in suburbia, purpose-built for digital video and interactivity from the ground up. Out-of-home digital signage faces a boom, with the expectation that mobile and advanced tech integrations will be prevalent, along with bots, targeted ads, seamless payments (yes, blockchain) and other transactions. This virtuous circle yields more data, channels for video networks, and places for brands to play. A good time will be had by all.”

    Hacking the semantics of cinematic stories

    Also on April 10, the conference “Audience Genomics: Neuroscience & Machine Learning Practices To ‘Hack’ Audience Segmentation” takes participants in another direction, stating that “the economics of media audiences is changing in profound ways: limited demand (no more than 24 hours in a day) and the explosion of supply (cheap or free content) are creating radically new behavior and cognitive models. Expert and empowered audiences are less prone to top-down marketing, they demand novelty in storytelling, and don’t fit the traditional 4 quadrant segments.”

    The result is evident: “the media and entertainment industry needs to change not just the tools it uses to develop audience insights, but its entire way of thinking about its products and market. Luckily, the explosion of available data, as well as new methods and tools drawn from artificial intelligence and neuroscience can help the industry create better models of audience behavior. This keynote will lay out these new methods, models and tools.”
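
    The keynote stops at methods and models, but the direction is concrete enough to sketch. Below is a minimal, hypothetical Python example of the kind of behavior-based audience modeling it points to: clustering viewers by viewing habits instead of fixed demographic quadrants. The features and data are invented for illustration.

    ```python
    # Sketch: behavior-based audience segmentation (synthetic data).
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    n_viewers = 500
    # Hypothetical per-viewer features: hours watched per week,
    # share of first-view ("novel") titles, average binge-session length.
    features = np.column_stack([
        rng.uniform(0, 40, n_viewers),
        rng.uniform(0, 1, n_viewers),
        rng.uniform(1, 10, n_viewers),
    ])
    # Four clusters stand in for the "4 quadrant segments", but they are
    # learned from behavior rather than imposed by demographics.
    segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(features)
    print(np.bincount(segments))  # viewers per behavioral segment
    ```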

    The same day, the “hack” continues with another aspect of the industry. The conference “Content Genomics: Neuroscience & Machine Learning Practices to ‘Hack’ Content Recommendation” starts from the idea that “for over a century, the media and entertainment industry has used cinematic information to tell powerful stories, and those stories have resonated with audiences” to ask “But why? and how? What is the meaning of green? What is the meaning of a drop cut?”. The questions are the cue for the answer: “The emergence of machine learning and neuroscience are, for the first time, leveraging the tools of hard science to ‘hack’ the semantics of cinematic stories, and the intense relationship between content and audiences. This panel will lay out these new tools and methods, and explain why content recommendation is about to get a lot more granular.”

    How AI can change next-gen NLEs

    Artificial intelligence (AI) has been the buzzword of most tech talks for the last couple of years. Beyond the buzz, though, there’s still a gap between the technology itself and most media workflows. The conference “AI – From Buzz to Bucks” wants to show how content owners and creatives alike can harness AI’s full potential to work faster and create even better stories. How to lower costs, “reduce publishing time, create a better experience for both editors and end-users, and how it can even make your content more human” are themes explored in this session.

    April 11 is the day to look at what the future will bring regarding non-linear editors. The conference “How Advances in AI, Machine Learning, & Neural Networks Will Change Content Creation” looks at how Digital Nonlinear Editing Systems (DNLEs) have functioned as a practical replacement for film and videotape editing methodologies for almost 30 years, but reminds us that “while DNLEs have progressed in their capabilities by offering increased video resolutions and visual effects capabilities, they have not fundamentally changed their operational constructs. An editor still must choose the in points and out points of a shot and then methodically edit those shots together into a cohesive sequence.”

    Technology improvements in Artificial Intelligence, Machine Learning, and Neural Networks will profoundly impact not only how DNLE systems operate but also the very nature by which content is created in an automated fashion, without human intervention. This paper, under the Broadcast Engineering and Information Technology Conference umbrella, “will examine the effects of image recognition, natural speech processing, language recognition, cognitive metadata extraction, tonal analysis, and real-time data and statistical integration and analysis on the content creation process.”
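
    To make the editing angle concrete: the most basic step an assistant AI could automate is finding candidate in and out points. Here is a minimal, hypothetical Python sketch using OpenCV that flags shot boundaries from histogram changes between consecutive frames. The file name and threshold are illustrative assumptions, not anything presented in the paper.

    ```python
    # Sketch: find candidate cut points from histogram changes between frames.
    import cv2

    def detect_cuts(video_path, threshold=0.5):
        """Return frame indices where the color histogram changes sharply."""
        cap = cv2.VideoCapture(video_path)
        cuts, prev_hist, idx = [], None, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
            cv2.normalize(hist, hist)
            if prev_hist is not None:
                # Correlation near 1.0 means similar frames; a sharp drop
                # suggests a shot boundary (a candidate edit point).
                if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                    cuts.append(idx)
            prev_hist, idx = hist, idx + 1
        cap.release()
        return cuts

    print(detect_cuts("example_clip.mp4"))  # hypothetical input file
    ```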

    AI: your next production assistant

    The session “How AI will Take Productivity in the Broadcast Industry to the Next Level” will, no doubt, generate some discussion, as it reflects on the potential of using AI to mimic creative human behavior instead of merely following inflexible automation processes. By learning to replicate the creative imagination of humans, AI will act as a built-in production assistant – embedding itself in live workflows to supplement operator resources. The speaker will showcase examples like AI-assisted framing, highlights creation, intelligent camera calibration and graphics insertion, and, ultimately, complete program direction – ushering in the artificially intelligent live production workflow of the future.
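
    As a flavor of what AI-assisted framing means in practice, here is a minimal, hypothetical Python sketch that re-centers a crop on a detected face using OpenCV’s stock face detector. The file names and crop size are illustrative assumptions, not the speaker’s actual system.

    ```python
    # Sketch: re-center a crop on a detected face (toy auto-framing).
    import cv2

    frame = cv2.imread("studio_frame.png")  # hypothetical input frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # OpenCV ships this stock face detector with the package.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray)

    if len(faces) > 0:
        x, y, w, h = faces[0]
        cx, cy = x + w // 2, y + h // 2      # center of the first face
        # Crop a 1280x720 window centered on the face (clamped at edges).
        x0, y0 = max(0, cx - 640), max(0, cy - 360)
        cv2.imwrite("reframed.png", frame[y0:y0 + 720, x0:x0 + 1280])
    ```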

    “AI-Driven Smart Production” is the title of another conference scheduled for April 11, which will explain how this new technology uses artificial intelligence (AI) to convey information to viewers quickly, accurately, and automatically. AI-Driven Smart Production consists of two technical components. The first uses program production assistance technologies to automatically extract useful information from big data and present it to program producers. The other uses a technology for automatically converting broadcast data into forms that can be easily understood by all viewers, i.e., sign language CG animation for the hearing impaired and automatic audio description generation for the visually impaired. Some of these AI-Driven Smart Production technologies are already in trials at broadcasting sites, and can be expected to transform the working style of program production and to enhance the abilities of program producers.

    “Looking Ahead: How AI is Powering the Intelligent Future of Video Editing” is yet another conference, this one looking at how content providers are integrating AI technology into their workflows. There is a notion that “implementing cognitive technology cuts down manual processing time, enhances content search and discovery, and delivers key insights, thereby streamlining workflows and freeing up teams.” The speaker, IBM’s Pete Mastin, will discuss the key indicators of successful AI-powered cloud platforms, as well as how media companies can leverage AI to extract actionable insights from their video content.

    Improving Quality, Increasing ROI

    According to Statista, businesses worldwide are forecast to spend a total of 204 billion dollars on digital advertising in 2018. The “AI-ding Advertising and Increasing Your ROI” conference on April 11 therefore looks at how brands can leverage AI to align marketing messages with video content, identify specific viewer characteristics and analyze the impact of product placement, in order to optimize spending and increase total revenue.

    This session reveals “how businesses can unlock the power of AI to make the most of their spending through intelligent ad targeting and analysis”, along three lines. Aligning advertisements with video content: armed with granular knowledge of video topics, advertisers can place content where it strategically fits, ultimately driving the most impact. Delivering highly targeted ads: by aggregating viewer engagement metrics and social media data, advertisers can deliver marketing messages to the most relevant audiences. Assessing the value of product placement: through automatic logo detection, AI helps advertisers evaluate the performance of logo insertion within video content and adjust strategy accordingly.
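
    The last item, automatic logo detection, is easy to sketch. Below is a minimal, hypothetical Python example using OpenCV template matching as a simple stand-in for the AI-based detection the session describes; the file names and match threshold are illustrative assumptions.

    ```python
    # Sketch: locate a known logo in a frame via template matching.
    import cv2
    import numpy as np

    frame = cv2.imread("frame.png")        # hypothetical video frame
    logo = cv2.imread("brand_logo.png")    # hypothetical logo template
    # Normalized cross-correlation; the template must be smaller than the frame.
    scores = cv2.matchTemplate(frame, logo, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(scores >= 0.8)       # keep only strong matches
    for x, y in zip(xs, ys):
        print(f"possible logo at x={x}, y={y}")
    ```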

    Internet video streaming has revolutionized the way media is consumed, but it has also brought lower stability and consistency compared to traditional broadcasting. The conference “Machine Learning for OTT: improving Quality of Experience with Data” looks at a known problem, the notion that “Quality of Experience is reduced as congested links and connectivity losses become a trend”, and shows how Machine Learning and centrally-managed overlay networks can be used as key techniques to detect and predict Quality of Experience issues, as well as to improve the streaming performance and scalability of an OTT broadcasting network.
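
    As a concrete illustration of what predicting Quality of Experience issues can mean, here is a minimal, hypothetical Python sketch that trains a classifier on player telemetry to flag an imminent rebuffer. The features and synthetic data are invented for illustration and are not from the session.

    ```python
    # Sketch: predict an imminent rebuffer from player telemetry (synthetic).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical per-session telemetry: throughput (Mbps), buffer level (s),
    # recent packet loss (%). Low buffer plus low throughput means trouble.
    throughput = rng.uniform(0.5, 25.0, n)
    buffer_s = rng.uniform(0.0, 30.0, n)
    loss_pct = rng.uniform(0.0, 5.0, n)
    rebuffer = ((buffer_s < 4.0) & (throughput < 3.0)).astype(int)

    X = np.column_stack([throughput, buffer_s, loss_pct])
    X_tr, X_te, y_tr, y_te = train_test_split(X, rebuffer, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
    ```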

    Captioning as an essential feature

    Television is not forgotten, and the conference “The AI-Powered TV User Experience” tells attendees that “the traditional TV user experience is sluggish, cumbersome, and one-size-fits-‘some’”, and shows how, “with new services crowding the market and content offerings growing exponentially, operators struggle to differentiate. New UX management technology based on data, learning algorithms and pervasive automation is changing the rules of the game.”

    The last conference, on April 12, titled “How Can AI Elevate Your Closed Captioning Solutions?”, reveals that “85% of Facebook video is viewed without sound (source: Digiday), marking viewers’ rising inclination towards captioned content.” It’s an interesting finding in a world we associate with sound and vision: captions are essential, and as video cements itself as the dominant form of modern communication, companies face the challenge of developing compliant captioning solutions that are accurate, synchronous, and complete.

    Artificial Intelligence seems to be the keyword here, as “with AI-powered technology, media companies can provide elevated captioning that meets these requirements and goes beyond simple transcription. Specifically, this technology can automatically identify background audio descriptions such as gunshot sounds, thunder, and footsteps, therefore providing more comprehensive captions for viewers. In this session, attendees will discover the ways AI can assist companies in building end-to-end closed captioning solutions with a level of accuracy not available elsewhere.”
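
    The workflow implied here is straightforward to sketch: a speech recognizer produces dialogue cues, an audio-event model produces sound labels, and the two streams are merged into one caption track. Below is a minimal, hypothetical Python example of that merging step; the timestamps and detector outputs are invented for illustration.

    ```python
    # Sketch: merge dialogue cues with detected sound events into one track.
    from dataclasses import dataclass

    @dataclass
    class Cue:
        start: float  # seconds
        end: float
        text: str

    # Hypothetical outputs of a speech recognizer and an audio-event model.
    speech = [Cue(1.0, 3.2, "Get down!"), Cue(6.0, 8.5, "It's over.")]
    events = [Cue(3.4, 3.9, "gunshot"), Cue(4.0, 5.5, "thunder")]

    def merge_captions(speech, events):
        """Interleave dialogue with bracketed sound descriptions by start time."""
        cues = speech + [Cue(e.start, e.end, f"[{e.text}]") for e in events]
        return sorted(cues, key=lambda c: c.start)

    for cue in merge_captions(speech, events):
        print(f"{cue.start:5.1f}-{cue.end:5.1f}  {cue.text}")
    ```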

    The 19 conferences and panels centered on Machine Learning and Artificial Intelligence applied to different areas of the industry represent a unique opportunity to understand how these technologies are changing the landscape for moviemaking professionals. The conferences are complemented by a visit to the A.I. Experiential Zone, which enables attendees to see how machine learning is transforming the media and entertainment industry. According to the organizers, the A.I. Experiential Zone is a dynamic, educational showcase featuring real-world applications and content workflows for automatic speech recognition, natural language processing (NLP), deep learning-based image and video analysis, and more.

    The post NAB 2018: Artificial Intelligence in the spotlight appeared first on ProVideo Coalition.