Navigating New Waters: The Film Industry’s Rebirth Post-Strikes

A Time of Transformation: Strikes and Their Impact on the Film Industry

The past year has been a landmark period for the film and television industry, marked by significant challenges and transformative changes. The industry witnessed an unprecedented shutdown due to the combined forces of the writers’ and actors’ strikes. These strikes, driven by demands for fair compensation and better working conditions, particularly in the burgeoning streaming sector, caused a ripple effect across the industry.

The economic impact of these strikes has been substantial. With production halted for over six months, the industry faced billions in revenue losses. This downtime affected not only major studios and streaming giants but also the myriad small businesses and freelancers who form the backbone of this sector. Job losses were widespread, from crew members to post-production teams, reflecting the interdependent nature of our industry.

Despite these challenges, the industry has shown remarkable resilience. The Writers Guild of America (WGA) and the Screen Actors Guild (SAG-AFTRA) have successfully negotiated new agreements with the studios. These landmark deals include significant gains such as increased minimum wages, a new residual structure for streaming content, and, notably, extensive protections in the rapidly evolving realm of AI technology. These agreements mark a positive step toward a fairer and more sustainable industry for all involved.

Innovation Amidst Adversity: The Shift in Virtual Production

The strikes’ impact on the virtual production market, particularly in film and television, inadvertently led to a significant shift in the industry’s focus and business models. While traditional media production faced halts and delays, there was a notable pivot toward other forms of digital content creation, such as music videos, game shows, YouTube content, and other online media formats. This shift was not just a stopgap measure but also a strategic move to explore and innovate in spaces less affected by the strikes.

Innovation in Business Models

  1. Diversification into Digital Media:
    Production companies and studios began investing more in digital-first content, recognizing the growing influence of platforms like YouTube and Twitch. This diversification allowed them to tap into new audiences and revenue streams.

  2. Collaborations with Online Creators:
    There was an increased collaboration between traditional production houses and online content creators. These collaborations brought together the expertise of seasoned production professionals with the fresh perspectives and audience understanding of digital natives.

  3. Virtual and Augmented Reality Experiences:
    The halt in traditional production accelerated the exploration of VR and AR for creating immersive experiences. This was particularly evident in the gaming industry and in interactive music videos, which offered novel ways of audience engagement.

  4. Remote Production Capabilities:
    The necessity to work around the strikes led to a rapid adoption of remote and cloud-based production tools. This not only allowed work to continue despite physical restrictions but also opened up global collaboration opportunities.

Impact of Market Halt on Film and Television

  1. Layoffs and Restructuring:
    The prolonged halt in film and television production led to layoffs and restructuring within many companies. This was a direct consequence of reduced revenue streams and the need to adapt to a rapidly changing market landscape.

  2. Shift in Investment Priorities:
    Investors and studios began redirecting funds towards more resilient and adaptable business models, often favoring digital and online content over traditional media formats.

  3. Long-term Strategic Changes:
    The industry started re-evaluating its reliance on traditional production methods. This introspection is leading to more sustainable, flexible, and innovative approaches to content creation and distribution.

The strikes, while initially disruptive, ultimately served as a catalyst for innovation and diversification in the entertainment industry. The forced pause in film and television production prompted a rethinking of traditional business models, leading to an embrace of digital platforms and technologies. This shift not only helped mitigate the immediate impacts of the strikes but also set the stage for a more dynamic and resilient future for content creation and distribution.

The Future Ahead

The resolution of the strikes and the reopening of the industry mark the beginning of an exciting chapter. As an independent software vendor, we are committed to being at the forefront of this transformation, providing tools, technology, and education that align with the industry’s renewed focus on fairness, innovation, and creativity. Together, we can help shape a film and television industry that is not only more equitable but also more dynamic and thriving than ever before.

At Mod Tech Labs, we have been actively working at the intersection of technology and industry for over a decade. Our unique vantage point has helped us build tools that empower creatives and reduce friction in production workflows, making it faster and easier to deliver high-quality visual content that will perform on every screen, every time.

Check out some of our previous work in Best Practices and Standards for AI:

Guidelines for Developing Trustworthy Artificial Intelligence Systems

Once Upon a Screen: A Tale of Technological Transitions in Film

Well, my dear, gather around. Grandma has a fascinating tale to tell you tonight, one of dreams and creativity, and of how the magic of the movies came to be what it is today. You see, long ago, even before your grandma was a young girl, the movie industry was a simple yet exciting world. People flocked to cinemas to witness marvels unfolding on the giant screens, marvels that were crafted painstakingly by hand with real objects and captured on film.

The earliest filmmakers were sorcerers of sorts, manipulating physical things to create illusions on film. They used miniatures, puppets, and even stop-motion animations to tell stories. As technology began to evolve, so did their tools, and soon they discovered a new kind of magic: Computer Generated Imagery, or CGI as it’s known.

With CGI, these artists could create things from scratch within a computer. They started with simple objects but soon progressed to entire scenes and even complex characters! It was like having an entire universe at their fingertips, all with the power of 3D modeling software. It was a time of boundless creativity and endless possibilities, opening up new horizons for filmmakers.

But, my dear, that was not all. There was another type of wizardry at work – Visual Effects, or VFX. Unlike the physical magic of old or the digital creations of CGI, VFX was all about enhancing and manipulating the footage captured with a camera. It was like adding a dash of sparkle to an already beautiful painting. These VFX artists could take what was real and amplify it, adding a touch of wonder and excitement.

From the 1960s to the 70s, 80s, and 90s, these wizards honed their craft, their spells becoming more powerful and intricate. With every passing decade, movies became more visually stunning and enthralling. Then, something extraordinary happened at the turn of the millennium. A new tool was discovered – digital compositing. This allowed the artists to combine real footage with their computer-generated creations seamlessly, creating an even more immersive illusion.

Today, the spells of VFX are woven into almost every film you see. It has also spread its magic to other realms like advertising, video games, and even virtual reality. But my dear, the story doesn’t end there.

In our magical realm of filmmaking, a new kind of sorcery is emerging, known as In-Camera Visual Effects or ICVFX. This magic blends the virtual and real elements during filming itself, using tools like LED screens to display virtual backgrounds in real-time. This not only reduces the need for post-production work but also allows filmmakers to create even more stunning visual effects.

Now, we stand at the edge of a new era, where real-time graphics and ICVFX are shaping the future of filmmaking. We have journeyed from practical effects, through the digital revolution of CGI and VFX, to this exciting new frontier. The story of film graphics is a tale of constant innovation, driven by magical advancements in technology. It’s a tale of how we’ve enhanced storytelling, making it more immersive and visually stunning.

So, my dear, as you close your eyes and dream of distant galaxies and mythical creatures, remember that these are not just figments of imagination, but a testament to human creativity and technological prowess. This is the magic of film graphics, my dear, and I assure you, the future holds even more wonder and excitement. Now, sweet dreams, and remember, the screen’s the limit!

A Big 5 Studio


THE CHALLENGE

This project was a pilot to create a digital backlot from existing LiDAR and photogrammetry imagery. No commercially available solution was able to process this massive amount of data. Processing and optimizing datasets like the backlot typically takes months of time and massive purpose-built compute capability, along with large teams for cleanup and finalization, or rebuilding the entire scene using the scans as reference for an artist-generated model.


THE SOLUTION

MOD was first configured with a custom workflow to handle the very large dataset, and the main asset was finalized in 10 days. The main asset was then processed into optimization levels targeted at the transmedia use cases noted by NBC. Our texture and secondary-map generation increased the detail and quality of the scans, making the assets tunable with layers for a realistic look. MOD was also able to overcome some of the capture flaws with automated cleanup, bringing total processing for the optimized assets to 14 days and decreasing cost by roughly 90%.

THE RESULTS

The overall time to process was significantly reduced, from ~102 days to ~21 days, a nearly fivefold improvement in throughput. More assets were created to the specifications of different transmedia use cases, expanding revenue channels and content deployment opportunities. The internal teams using these assets became more efficient, as they were able to speed up implementation and had different types of assets available.


Cost

Expected: $400K → Final: $42K (an 89.5% decrease)

Time

Expected: 6 months → Final: 1 month (an 83.3% decrease)

Team

From a 30-person outsourced team to MOD.

Capture

The initial LiDAR scans and photos were provided by an outside vendor. MOD was able to overcome some of the capture flaws with automated cleanup.


Machine Learning for Production
There are myriad ways that machine learning can be used for production.

Having worked extensively in games, movies, immersive media, and XR across entertainment, corporate, and even medical applications, our businesses have seen a wide variety of innovations and opportunities to break boundaries.

One great example is iDrive VR, released by Quantum Rehabilitation, the second-largest wheelchair manufacturer in the world. The driving simulation took the actual physical chair controls and matched them to the VR controls for a life-like experience of learning to drive a power chair. As a user learns to physically drive a chair, with any type of control, they can sit in any kind of chair to drive the VR experience, which technically drives Windows. This overcomes the lack of clinicians and of in-chair experience before users get approved or denied for a chair, a one-time and final decision.

The work we did for KPMG focused on interactive multi-user data visualization. Another early product, Volumation, was a volumetric capture and processing solution built for prosumer use.

Before we dive into the technical goodness, there are some important foundations that got Mod Tech Labs to where it is today. To start, CTO Tim Porter graduated with a Bachelor of Science in Computer Animation from Full Sail University. He also worked in games and movies, including Alice in Wonderland: Through the Looking Glass, Cars, Disney games, DreamWorks titles, Dave and Busters augmented reality experiences, and many more.

Mod Tech Labs got its start in Austin, Texas, becoming a venture-backed company in 2020 through investment from SputnikATX and Quansight Futures. Both CEO Alex Porter and the CTO have been Intel Top Innovators since 2017 as well as City of Austin Innovation Award recipients. They went on to be recognized by the Forbes 100 and NVIDIA Inception, earned technology-leadership recognition through their work at the Consumer Technology Association publishing standards for Limited Mobility and Diversity and Inclusion for XR, and, last but not least, won a $1M investment from 43North in 2022.

From the beginning, MOD’s core tenets have been faster processing, automated systems, universal input and output, and hybrid processing across cloud and on-premise systems.

These core features are aimed directly at one of the biggest issues in VFX: the amount of time it takes to do things. Most tasks are fairly manual, and even where automated systems exist, there are so many stopgaps and inefficient processes that are linear and fragile.

Automated workflows with universal input and output mean any imagery data goes into the system, including photos, videos, scans, or models, and can be output as any open file type such as FBX, OBJ, and many more. Proprietary file types make it hard to work between programs and to move data efficiently through the workflow, so we got rid of that hurdle. A private, secure cloud is one option for scaling systems as companies take a hybrid approach, and certifications such as the Trusted Partner Network (TPN), powered by the Motion Picture Association, verify security and allow for more options and providers. The costs and obstacles of public GPU instancing, which is especially relevant to imagery data, make it less attractive to studios that already run on smaller margins, though the model is changing with some of the new tools available from NVIDIA. During the pandemic many companies explored pure cloud and found it too expensive; the latest products mentioned above will change this dynamic once again.
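As a rough illustration of the open-format idea, and not MOD’s implementation, the open-source trimesh library can round-trip between open mesh formats in a few lines (file names here are hypothetical):

```python
# A minimal sketch of open-format mesh conversion using the open-source
# trimesh library; filenames are hypothetical, and this is illustrative
# only, not MOD's implementation.
import trimesh

# Load any supported open format (OBJ, PLY, STL, GLB, ...).
# force="mesh" flattens multi-part files into a single mesh.
mesh = trimesh.load("asset.obj", force="mesh")

# Export to another open format so any downstream tool can consume it.
mesh.export("asset.glb")
print(f"exported {len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
```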

MOD leverages machine learning to distribute workloads across a cluster, making processing much more efficient and faster. It also trains algorithms on proprietary data to create automated secondary maps, the tunable layers that make assets fit each environment. All of these parts are integral to making scenes and objects look better and play back on any screen.
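As a toy stand-in for that distribution (MOD’s actual cluster scheduler and its ML-driven load balancing are proprietary), a local worker pool shows the fan-out shape of the work:

```python
# Toy stand-in for distributing per-asset work across workers; a local
# process pool substitutes here for MOD's proprietary cluster scheduler.
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def process_asset(path: Path) -> str:
    # Placeholder for a real pipeline stage (cleanup, decimation, ...).
    return f"processed {path.name}"

if __name__ == "__main__":
    assets = sorted(Path("scans").glob("*.ply"))  # hypothetical input dir
    with ProcessPoolExecutor() as pool:
        for result in pool.map(process_asset, assets):
            print(result)
```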

MOD is a robust, configurable 3D workflow with more than 90 microservices, including image resizing, sharpening, unoptimized mesh and point cloud creation, voxelization, automated color correction, mesh creation, decimation, cleaning, remeshing, texture baking, secondary maps, texture optimization, and many more. For a full list of features, feel free to reach out to the team for a chat.
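One way to picture a configurable workflow built from many small services is a declared list of steps applied in order. The registry below is a hypothetical sketch; the real microservices are far more involved:

```python
# Hypothetical sketch of a configurable step pipeline in the spirit of
# MOD's microservice workflow; step bodies are stubs for illustration.
from typing import Callable, Dict, List

Step = Callable[[dict], dict]

REGISTRY: Dict[str, Step] = {
    "resize":   lambda asset: {**asset, "resized": True},
    "sharpen":  lambda asset: {**asset, "sharpened": True},
    "decimate": lambda asset: {**asset, "decimated": True},
    "bake":     lambda asset: {**asset, "textures_baked": True},
}

def run_workflow(asset: dict, steps: List[str]) -> dict:
    # Apply each configured step in order, passing the asset through.
    for name in steps:
        asset = REGISTRY[name](asset)
    return asset

print(run_workflow({"name": "fountain_scan"}, ["resize", "decimate", "bake"]))
```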

AI-Powered Automated 3D Workflows

Photos/Models
  • Real-time Optimization
  • Stage Profiler
  • Anomaly Detection
  • ML Secondary Maps
  • ML Training
  • Image Resize
  • Image Sharpening
Process
  • Retopology
  • Photogrammetry Mesh
  • LiDAR Processing
  • Video Decoding
  • Color Correction
  • Decimation
  • Texture Optimization
Playback
  • Remeshing
  • Mesh Smoothing
  • Degrain / Denoise / Deblur
  • Point Cloud to Mesh
  • Exposure Control
  • Image Conversion

Our long-term plan is to continue adding features and make it simple to drag and drop your desired workflow, or pick a template and we will automate the rest. Other features in the works include automated delighting, de-shadowing, cutout tools, auto-rigging, full FACS and wrap remeshing, full green-screen correction, and even automatic rotoscoping. We are also working on FBX encoding optimization, which produces a single .fbx file for an entire volumetric capture sequence: textures, meshes, and animation, all in one.

This is an automated texture optimization tool that uses machine learning to keep the image scale and quality as it downsamples.

The first mesh in that lineup is textured at 8K, followed by 4K, 2K, and 1K. The most amazing part is that it is really hard to tell any massive quality difference between them, even though going from 8K to 1K is literally 1/64th the size.
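MOD’s ML-driven optimizer is proprietary; as a far simpler classical stand-in, high-quality Lanczos resampling can produce the same 8K-to-1K ladder (file names are hypothetical):

```python
# Classical stand-in for texture LOD generation: Lanczos resampling via
# Pillow, not MOD's machine-learning optimizer. Filenames hypothetical.
from PIL import Image

src = Image.open("texture_8k.png")  # hypothetical 8192x8192 source
for size in (4096, 2048, 1024):
    lod = src.resize((size, size), Image.LANCZOS)
    lod.save(f"texture_{size}.png")
# Each halving cuts pixel count to a quarter, so 8K -> 1K
# is (1/8)^2 = 1/64th of the original data, matching the text above.
```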

This is the MOD mesh decimator, which does a lot of work to ensure that the actual edging and silhouette are kept. Most decimators do a really poor job of that, but MOD solves for it by keeping a silhouette from the scan. Our algorithm balances between the two meshes to create an optimal output, which can be seen in the mesh below.

If you zoom in closely on the figure in the center of the fountain, you can see the decimator at work. Most decimators will not deliver quality like this. Using our machine learning, we are able to cut the mesh roughly in half, from 4.49 million triangles down to 2 million, and the quality is phenomenal.
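For a sense of the baseline MOD is compared against, here is plain quadric decimation with the open-source Open3D library; it halves the triangle count as in the fountain example, but has none of the silhouette preservation described above:

```python
# Baseline quadric decimation with Open3D, for comparison only; MOD's
# silhouette-preserving decimator is proprietary. Filename hypothetical.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("fountain_scan.ply")
print(f"input: {len(mesh.triangles)} triangles")

# Halve the triangle count, as in the 4.49M -> 2M fountain example.
simplified = mesh.simplify_quadric_decimation(
    target_number_of_triangles=len(mesh.triangles) // 2
)
o3d.io.write_triangle_mesh("fountain_half.ply", simplified)
print(f"output: {len(simplified.triangles)} triangles")
```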

Automated remeshing does flow mapping along the entire surface. Notice the quadrangulation and the linear flow processing that runs across this asset. The mesh above shows the lines actually running across the surface; the algorithm understands a certain amount about not only the topology but also the topology flow, and how to reproduce that flow to produce an excellent result. The lapel of the jacket on the bust is another key example of edging.

Normally when a remesh or optimization is done, the artist will state, “I want X number of polygons,” but that is not the most effective process. With MOD it is instead, “I want this much crease,” and, “I want to make sure the edges stay this true to the original,” and that is how MOD produces higher-quality assets.
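A sketch of what “decimate to a quality target rather than a polygon budget” can look like: keep simplifying until an approximate surface-deviation measure exceeds a tolerance. The deviation proxy below (Open3D point-cloud distance) is a crude stand-in for MOD’s crease and edge-fidelity metrics, and the filename and tolerance are hypothetical:

```python
# Sketch of quality-driven decimation: simplify until surface deviation
# exceeds a tolerance instead of targeting a polygon count. The
# point-cloud distance is a crude proxy for MOD's crease metric.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("bust.ply")  # hypothetical asset
reference = mesh.sample_points_uniformly(number_of_points=50_000)

tolerance = 0.001  # maximum allowed deviation, in the mesh's units
best, tris = mesh, len(mesh.triangles)
while tris > 1_000:
    tris //= 2
    candidate = mesh.simplify_quadric_decimation(target_number_of_triangles=tris)
    sampled = candidate.sample_points_uniformly(number_of_points=50_000)
    deviation = float(np.max(np.asarray(
        reference.compute_point_cloud_distance(sampled))))
    if deviation > tolerance:
        break  # quality budget exceeded; keep the previous level
    best = candidate

print(f"kept {len(best.triangles)} triangles within tolerance {tolerance}")
```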

Above, notice that the first model is the big original raw asset, very high-res at 1.4 million tris. The second model (MOD Movie Quality Remesh) goes down to 87,000! The quality of the shirt is where to pause and watch the mesh flow across seamlessly, especially if this is not a hero asset. The third model goes down to 28,000, with similar results. Lastly, there is a really low-res mesh at 13,000 tris. If the asset is going to be used in a game or something similar, you will end up baking in the secondary maps to keep that quality.

Above is automated texture reprojection using a considerable amount of cage work. Normally, when doing a reprojection, an artist would say, “I want a cage that is this big, this many units.” MOD instead does a large amount of work based on the original images and other setups, such as looks from different angles, to figure out how big the cage should be, ensuring it is always the appropriate size.
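One naive heuristic for auto-sizing a cage, much simpler than the image-driven sizing described above, is to inflate the mesh along its vertex normals by a fraction of the bounding-box diagonal (the offset fraction and filenames below are hypothetical):

```python
# Naive cage-sizing heuristic for reprojection: inflate the mesh along
# vertex normals by a fraction of its bounding-box diagonal. This is a
# hypothetical stand-in for MOD's image-driven cage sizing.
import numpy as np
import trimesh

mesh = trimesh.load("bust.obj", force="mesh")
diagonal = np.linalg.norm(mesh.bounds[1] - mesh.bounds[0])
offset = 0.01 * diagonal  # push the cage out by 1% of the diagonal

cage = mesh.copy()
cage.vertices = cage.vertices + cage.vertex_normals * offset
cage.export("bust_cage.obj")
print(f"cage offset: {offset:.4f} units")
```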

Texture Reprojection: in the Bust Demo, the original texture is on the left and the remake is on the right, providing the ability to do whatever is wanted with the UVs, a very important feature.

And lastly, automated glTF playback, as seen above, has machine learning primarily in the mesh generation.

MOD mesh generation is different from others because of how we find our cameras in three-dimensional space, and also because of how we use that camera information to provide secondary features once we get past the dense reconstruction; that is how we get higher quality based on the textural detail of the images, detail that is often lost in most photogrammetric processes. You go from an initial mesh to a high-res mesh, think, “Okay, cool, I want to optimize it some…” and then what do you do with those details?

One of the big ways a lot of studios handle it is to go in and hand-remap over all of it with varying brushes; removing this manual part automates the process. And this, of course, is just glTF on top of it. These (in the video) are super-optimized assets: the first one is only 10 MB, and the second is less than 100 MB for a full-body sequence… that’s it.

Other features, like motion blur reduction and machine learning color grading, innovate with convolutional neural networks to understand what an asset is. For example, the system takes someone’s skin tone, recognizes it as the same asset over time, and then applies the style transfer throughout all the scenes. The MOD style transfer tool lets a user feed in both the image to be color graded and the image the grading should come from, and it performs an intelligent combination of the two.
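MOD’s CNN-based grading tool is proprietary; a much simpler classical relative of the idea is Reinhard-style statistical color transfer, which matches each channel’s mean and standard deviation to a reference image (filenames hypothetical):

```python
# Reinhard-style statistical color transfer: a simple classical relative
# of MOD's CNN-based style transfer tool, applied per RGB channel.
import numpy as np
from PIL import Image

def color_transfer(source_path: str, reference_path: str, out_path: str) -> None:
    src = np.asarray(Image.open(source_path).convert("RGB"), dtype=np.float64)
    ref = np.asarray(Image.open(reference_path).convert("RGB"), dtype=np.float64)

    # Normalize the source per channel, then rescale to the reference
    # statistics so the overall grade follows the reference image.
    out = (src - src.mean(axis=(0, 1))) / (src.std(axis=(0, 1)) + 1e-8)
    out = out * ref.std(axis=(0, 1)) + ref.mean(axis=(0, 1))

    Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(out_path)

# Grade `shot.png` so its colors follow `reference.png`.
color_transfer("shot.png", "reference.png", "shot_graded.png")
```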

MOD helps entertainment and media companies scale universal 3D content creation with AI-powered automated workflows. The no-code processing platform adds efficiency by automating workflows to clean, refine, and enhance 3D content. Taking photos, videos, scans, or meshes and making 3D with machine-learning-enhanced processes makes the output better over time and fully customizable. The increasing demand for digital objects, people, and places requires automation to scale. With MOD, teams can minimize skill requirements, time to deliver, and the overall cost of making 3D content.

Share the article and your thoughts – and please tell us what other topics you would like us to cover. 
