// MINIMA

hello@minima.nyc

We turn complex software problems into elegant solutions.

Minima works at the intersection of:

  • Machine learning
  • Artificial intelligence
  • Cloud computing
  • Video streaming

For top-tier companies, including:

Apple, Microsoft, Walmart, Amazon, Tom Ford, Coca-Cola

We value simplicity, clarity, and collaboration.

With over 10 years of experience together as a team, we've honed our ability to make swift and pragmatic technical decisions.

Assembling and training a dev team capable of working as a tightly integrated unit is an immense challenge. Minima is already that team. Let's get to work for you.

What We Do

  • System Architecture
  • Cloud Infrastructure (AWS, GCP)
  • Natural Language Processing and Machine Learning
  • Video and Streaming
  • Data and Analytics
  • Full-Stack Development
  • Migrations and Integrations

“How should we structure this new system?”
“Is our current system design optimized for scale?”
“Should we build vs. buy this part of our application?”
“Should we move from provider X to Y?”
“We need a team for Project X so the core team can stay on track.”
“We need a seasoned team to knock out this new product / feature.”
“We lost reporting capabilities after the GA4 transition. What can we do?”

Who We Are

Our ten years working together is our edge: we communicate fluidly, solve problems quickly, and deliver with a minimum of fuss.

Matt Walsh

Software architect.

Or Zubalsky

Software engineer, designer.

Carter Henderson

Full stack developer, infrastructure engineer.

Work Examples

"This video library is a colossal mess!"

  • Moving, transcoding & organizing half a million videos
  • Video transcoding
  • Data mapping and migration
  • On-premises and Cloud computation

We recently undertook a monumental project migrating a 500,000-video library to a cutting-edge system. Our client's assets had accumulated over 10 years in an understandably disorganized manner, and they needed to unify their video and metadata, as well as standardize it all for modern playback.

We leveraged our experience with a wide variety of technologies including FFMPEG, shell scripting, myriad AWS services and elbow grease to re-transcode each video from an array of legacy formats and aspect ratios into a consistent and contemporary streaming format.
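
To give a flavor of what one pass looked like, here is a minimal sketch in Python, assuming FFMPEG is available on the machine; the paths, codec choices, and 720p target are illustrative rather than the project's actual settings:

    # Sketch: re-transcode one legacy source into an HLS rendition.
    # Paths, codecs, and the resolution target are illustrative only.
    import subprocess

    def transcode_to_hls(src_path: str, out_dir: str) -> None:
        subprocess.run(
            [
                "ffmpeg", "-i", src_path,
                "-c:v", "libx264", "-c:a", "aac",   # normalize legacy codecs
                "-vf", "scale=-2:720",              # normalize resolution and aspect ratio
                "-hls_time", "6",                   # 6-second segments
                "-hls_playlist_type", "vod",
                f"{out_dir}/index.m3u8",
            ],
            check=True,
        )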

Additionally, we imported millions of metadata records in various states of disrepair, normalizing them and converting them to a modern database format. This facilitated seamless content management and allowed for the integration of a targeted ad system that proved very lucrative for our client. We optimized costs by leveraging both local and cloud resources.
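
As a simplified, hypothetical illustration of the normalization step (the field names and fallbacks below are invented, not the client's schema), each legacy record was coerced into a consistent shape along these lines:

    # Sketch: coerce one legacy metadata record into the shape of the new schema.
    # Field names and fallbacks are hypothetical; real records varied widely.
    def normalize_record(raw: dict) -> dict:
        return {
            "video_id": str(raw.get("id") or raw.get("video_id") or "").strip(),
            "title": (raw.get("title") or raw.get("name") or "").strip(),
            "duration_s": int(float(raw.get("duration") or 0)),
            "published_at": raw.get("pub_date") or raw.get("published_at"),
        }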

"Everything is everywhere but we can't combine reporting on it"

  • Normalizing analytics from OTT channels across disparate contexts
  • Data pipelines
  • Analytics visualization
  • BigQuery and multiple AWS services

A client recently came to us to find a way to integrate data points from third-party providers into a single, coherent system. We tackled the issue by aggregating fragmented data from sources ranging from Google Analytics to raw Cloudfront logs.

Data flowed from multiple sources, including a custom ETL pipeline and Google Analytics 4, into Google Cloud's BigQuery, where we meticulously parsed the event models for each data row. Through scheduled queries, we refined and standardized the data, enriching it with the client's proprietary app metadata. The result was a harmonized data landscape, enabling our client to access comprehensive, unified reporting across all their end-user contexts through LookerStudio.

In the modern video landscape, leveraging third-party providers for various tasks is common practice. However, this approach poses challenges for achieving a cohesive analytics overview, since each system generates its own distinct data points. Moreover, video consumption occurs across diverse platforms, including the web, embeds on external sites, apps on devices of all kinds, and even live linear broadcasts, each with its own event and metadata structures.
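
As a rough illustration of the scheduled standardization step described above (the dataset, table, and column names are placeholders rather than the client's schema), one such query driven from Python might look like this:

    # Sketch: standardize one source (the GA4 export) into a unified reporting table,
    # enriched with app metadata. All dataset, table, and column names are placeholders.
    from google.cloud import bigquery

    client = bigquery.Client()
    client.query(
        """
        INSERT INTO analytics.unified_events (event_ts, source, app_id, video_id, event_name)
        SELECT TIMESTAMP_MICROS(e.event_timestamp), 'ga4', m.app_id, e.video_id, e.event_name
        FROM raw.ga4_events AS e
        JOIN meta.apps AS m USING (stream_id)
        WHERE DATE(TIMESTAMP_MICROS(e.event_timestamp)) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
        """
    ).result()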

"It's gotta work for 1 or 1 million users"

  • Scalable infrastructure across front-ends, back office, and video processing for a single cohesive system
  • System architecture
  • Cloud infrastructure
  • Multiple AWS services

Over several years, we built a wide-ranging, globally scalable application with a consumer-facing front end, a client-user-facing back office, and a behind-the-scenes video processing system. This comprehensive system was designed, built, iterated on & maintained over the years, and used by hundreds of millions of users globally.

The front end serves tens of millions of users monthly, using multiple caching & load-balancing layers, including AWS Cloudfront, Redis on AWS EC2, and the AWS Application Load Balancer (ALB).

The video processing system transcodes, transcribes, and organizes thousands of videos per week, using EC2 for processing, S3 for storage, Transcribe for transcription, and SNS for system notifications. Everything comes together in the back office Video CMS, utilizing EC2, S3, Cloudfront, ALB, and RDS. Supplemental pieces of the ecosystem also use Glue, MediaTailor, MediaLive, Lambda, and more.
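
As a loose sketch of one link in such a pipeline (the bucket, topic ARN, and job name below are placeholders, not production values), kicking off a transcription job and notifying the back office via SNS might look like this:

    # Sketch: start an Amazon Transcribe job for a freshly processed video in S3,
    # then publish an SNS notification for the back-office CMS. Names are placeholders.
    import boto3

    transcribe = boto3.client("transcribe")
    sns = boto3.client("sns")

    job_name = "video-12345-transcription"
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": "s3://example-video-bucket/processed/12345.mp4"},
        MediaFormat="mp4",
        LanguageCode="en-US",
    )
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:video-pipeline-events",
        Message=f"Transcription started: {job_name}",
    )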

"How do we serve relevant videos on our uncategorized content?"

  • Content To Video Matcher
  • NLP and ML
  • API design
  • Deep analytics

We developed a technology to match a page's content with appropriate video content in order to build companion playlists for each page. The pages themselves were not tagged or categorized, so we implemented machine learning (ML) and natural language processing (NLP) libraries in Python to interpret the content to find the right videos.

At the system's core was an engine that coordinated various "rule sets" based on the nature of the client's installation. Each rule set contained a collection of matching algorithms configured for a particular install, which enabled us to tune the system for optimal results. For example, we found that some clients' users preferred more topical results, while others preferred particular videos regardless of content, so we employed different rule sets for each install.
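
A deliberately simplified sketch of the matching idea (not the production engine; the TF-IDF similarity below stands in for just one of many possible rules) might score candidate videos against a page's text like this:

    # Sketch: rank candidate videos against a page's text using TF-IDF cosine similarity,
    # one example of a "rule" an install-specific rule set might combine with others.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def rank_videos(page_text: str, video_descriptions: list[str]) -> list[int]:
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform([page_text] + video_descriptions)
        scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
        return sorted(range(len(video_descriptions)), key=lambda i: scores[i], reverse=True)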

Afterwards, we used our analytics to determine which approaches were working best, and employed machine learning to further refine the results based on the popularity of each video within a given context. This optimized the video playlists to encourage users to keep watching, thus increasing ad calls.
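
Purely as an illustrative (and deliberately non-ML) sketch of that re-ranking idea, with an invented 70/30 weighting, blending content relevance with in-context popularity could look like this:

    # Sketch: blend content-similarity scores with in-context popularity to re-rank a playlist.
    # The 0.7 / 0.3 weighting is invented for illustration.
    def rerank(content_scores: dict[str, float], plays_in_context: dict[str, int]) -> list[str]:
        max_plays = max(plays_in_context.values(), default=1) or 1
        blended = {
            vid: 0.7 * content_scores[vid]
                 + 0.3 * (plays_in_context.get(vid, 0) / max_plays)
            for vid in content_scores
        }
        return sorted(blended, key=blended.get, reverse=True)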

Ultimately, this resulted in millions of dollars of ad revenue for our clients over just a 2-year period.

Contact Us

Let's talk about your project.
Get in touch at hello@minima.nyc.