
Rendering Pipeline for Native Ads

October 23, 2017

Native advertisements are ads that resemble the UI/UX of the platform on which they are displayed. On Facebook, native ads are seamlessly woven into your news feed. A native ad in a game is one that fits seamlessly into the game's user experience, and different games vary considerably in their UI/UX.

At GreedyGame, we provide a platform for game developers to monetize using native advertisement units in mobile games. We needed a scalable system that could handle creating native ads across this wide range of inventory.


World Cricket Championship 2: In-game

Our challenge is to generate native advertisement units from the image and text resources we receive for an advertisement, and to do so in a way that works across a wide inventory of games. The images are brand creatives, and the text is either a description of the brand or a “Call To Action” (CTA) such as “Play now”. All of this must be done in (pseudo) real time.

Scale

One problem we faced was generating these ads across a wide range of games. We receive ~15,000 unique requests for creatives per day, and the creatives need to be generated as quickly as possible to keep the leakage rate low.

Design Philosophy

We use a simple abstraction in the form of layers when generating the units. Each layer is simply a resource that can be “drawn” onto the final unit.


Different types of layers

The concept of layers is similar to what you would expect in a graphics editor. Every layer supports certain operations, which transform the content of the layer. The layers we support are:

  • Image: This is a placeholder for the advertisement image we receive from our partners
  • Frame: This is a static image provided by the designers of the unit. There can be multiple frames, and they can be ordered in any way
  • CTA Icon: This is an icon chosen based on the CTA, which allows us to show a relevant icon for the campaign. For instance, a download icon could be shown if the CTA says “Install now”
  • Text: This is a placeholder for either the CTA or the title of the advertisement
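
The CTA-to-icon mapping in the third bullet can be sketched as a simple keyword lookup. The keyword table, icon names, and `icon_for_cta` helper below are hypothetical stand-ins, not the actual mapping system:

```python
# Hypothetical sketch: choosing an icon for a CTA string by keyword
# matching. The keywords and icon identifiers are illustrative only.
CTA_ICON_KEYWORDS = {
    "install": "download",
    "download": "download",
    "play": "play",
    "buy": "cart",
    "shop": "cart",
}

DEFAULT_ICON = "arrow"

def icon_for_cta(cta_text):
    """Return an icon identifier for a CTA such as 'Install now'."""
    lowered = cta_text.lower()
    for keyword, icon in CTA_ICON_KEYWORDS.items():
        if keyword in lowered:
            return icon
    return DEFAULT_ICON
```

A fallback icon keeps the unit renderable even when no keyword matches.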


Assembling Different Layers

Some of the layers are fixed. This includes the base frame, or other overlays/decals that are to be applied to the image. Other layers are not known until we have the actual advertisement, so those layers serve as placeholders until the actual content is obtained.


The final unit is assembled in real time


World Cricket Championship 2: Main Menu

Since our affiliate partners (who provide us with programmatic ads) conform to OpenRTB, we can always expect a standard set of resources, such as the icon and the CTA.

The individual layers can be created concurrently, but the final composition of the layers cannot. As each game has multiple units that need to be created, we currently create those different units concurrently, using the multiprocessing.dummy module.
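
The multiprocessing.dummy module exposes the multiprocessing API backed by threads, which suits this I/O-heavy work. A minimal sketch of the pattern, with `render_unit` and the unit configs as hypothetical stand-ins for the real rendering work:

```python
# Sketch of concurrent unit creation using multiprocessing.dummy, which
# provides a thread-backed Pool with the multiprocessing interface.
from multiprocessing.dummy import Pool

def render_unit(unit_config):
    # Stand-in for the real layer composition: fetching resources,
    # compositing, and writing the image out (mostly I/O-bound).
    return "rendered:%s" % unit_config["name"]

unit_configs = [{"name": "banner"}, {"name": "interstitial"}, {"name": "icon"}]

# Each game's units are independent, so they render in parallel.
with Pool(4) as pool:
    results = pool.map(render_unit, unit_configs)
```

`Pool.map` preserves input order, so results line up with the unit configs that produced them.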

The rendering engine makes sure the ad image fits the available space as well as possible. This is done by matching either the width or the height of the available space and scaling the image up or down while maintaining the aspect ratio. In addition, it can perform other operations such as filling unused space with a blur fill. It supports UI operations including padding and alignment, so that designers have basic functionality to play with when creating units in the native format.
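
The aspect-ratio-preserving fit comes down to scaling by whichever dimension constrains first. A minimal sketch (the function name and rounding choice are assumptions, not the engine's actual code):

```python
# Sketch of fitting an image into a slot while keeping its aspect ratio:
# scale by the tighter of the two dimension ratios.
def fit_dimensions(img_w, img_h, slot_w, slot_h):
    """Return (width, height) scaled to fit inside the slot."""
    scale = min(slot_w / img_w, slot_h / img_h)
    return round(img_w * scale), round(img_h * scale)
```

A wide image matches the slot's width and leaves vertical space (which the blur fill can cover); a tall image matches the height instead.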

This allows us to create a lot of interesting ad placements:

World Cricket Championship 2: In-game

Bus Simulator: Main menu

Configuration

To create the ad units, we have a JSON configuration file for each unit. This file specifies where the advertisement image should be placed, which frames should be drawn, and how the text units should be rendered:

[
  {
    "type":"frame",
    "path":"path_to_frame_image.png",
    "x":0, "y":0, "width":650, "height":230
  },
  {
    "type":"image",
    "x":37, "y":29, "width":171,"height":171,
    "operations":[
      { "name":"blur-fill" }
    ]
  },
  {
    "type":"text",
    "x":234, "y":71, "width":399, "height":78,
    "operations":[
      { "name":"color", "argument":"#000000" },
      { "name":"font",  "argument":"Roboto-Regular.ttf" }
    ]
  }
]
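
A config like the one above can drive the renderer with a simple dispatch over layer types. The sketch below only parses and walks the layers; `render_layer` is a hypothetical stand-in for the real compositing (which the engine does with wand):

```python
# Sketch: walking a unit config and dispatching per layer type. The
# render_layer stub just records what would be drawn.
import json

CONFIG = """
[
  {"type": "frame", "path": "path_to_frame_image.png",
   "x": 0, "y": 0, "width": 650, "height": 230},
  {"type": "image", "x": 37, "y": 29, "width": 171, "height": 171,
   "operations": [{"name": "blur-fill"}]},
  {"type": "text", "x": 234, "y": 71, "width": 399, "height": 78,
   "operations": [{"name": "color", "argument": "#000000"}]}
]
"""

def render_layer(layer):
    # Stand-in for compositing the layer onto the canvas at (x, y),
    # applying each listed operation in order.
    ops = [op["name"] for op in layer.get("operations", [])]
    return (layer["type"], layer["x"], layer["y"], ops)

layers = json.loads(CONFIG)
# Layers are drawn in list order, so frames/decals stack predictably.
drawn = [render_layer(layer) for layer in layers]
```

Keeping the config declarative means designers can author new unit layouts without touching the rendering code.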

Rendering Layers

The rendering is done using the wand library. Wand is a Python wrapper over ImageMagick, and we use it to perform all our image operations. The final ad units are rendered layer by layer. Some of the operations the rendering engine supports are:


  • Using Gaussian-blurred versions of the ad to fill empty space: Sometimes, the advertisement image doesn’t fit exactly into the frame provided. We use a blurred version of the image to cover the unused space.
  • Figuring out an appropriate font size to best fit the text: We try to pick a large font size, and iteratively reduce it till the entire text can be displayed in the space available. However, if the font size is so low that it is unreadable, we reject the ad. The idea is that we want the text to be readable, and to not show any ads that can’t be deciphered by the user.
  • Mapping text to relevant icons: We have a system that can choose an appropriate icon to display depending on the CTA. This allows us to provide a more visual representation of the CTA.
  • The rendering engine also does certain checks, such as ensuring that the font used supports all the characters in the text.
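
The font-size search in the second bullet can be sketched as a shrink-until-fit loop. The width estimate, starting size, and readability threshold below are assumptions for illustration; the real engine measures text with actual font metrics:

```python
# Sketch of the iterative font-size fit: start large, shrink until the
# text fits, and reject the ad if only unreadable sizes would fit.
MIN_READABLE_SIZE = 10  # assumed readability cutoff

def measure_width(text, size):
    # Crude stand-in for real font metrics: ~0.6em average glyph width.
    return len(text) * size * 0.6

def fit_font_size(text, box_width, start_size=48):
    """Return the largest size that fits, or None to reject the ad."""
    size = start_size
    while size >= MIN_READABLE_SIZE:
        if measure_width(text, size) <= box_width:
            return size
        size -= 1
    return None  # text would be undecipherable at any readable size
```

Returning `None` rather than a tiny size encodes the policy from the text: an unreadable ad is worse than no ad.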

Infrastructure

The rendering engine churns through over 15,000 requests a day. Since the response time of 3-4 seconds is too high for a synchronous API, we use RQ to enqueue jobs. Ad units are uploaded to S3 as they are generated. Once all the ad units are generated, the rendering engine hits a postback API with the URLs of the generated ad units.
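
The production system uses RQ (a Redis-backed job queue); the stdlib sketch below only illustrates the same enqueue/worker decoupling without requiring Redis. The job body, URLs, and postback step are hypothetical stand-ins:

```python
# Illustrative enqueue/worker decoupling using the stdlib. The real
# system enqueues jobs with RQ; this shows the shape of the flow.
import queue
import threading

jobs = queue.Queue()
generated_urls = []

def worker():
    while True:
        unit = jobs.get()
        if unit is None:  # sentinel: shut the worker down
            break
        # Stand-in for rendering the unit and uploading it to S3.
        generated_urls.append("https://s3.example.com/%s.png" % unit)
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for unit in ["banner", "interstitial"]:
    jobs.put(unit)
jobs.put(None)
t.join()
# Once every unit is done, the engine POSTs generated_urls to the
# postback API (omitted here).
```

Because the caller only enqueues and returns, the 3-4 second render time never blocks a request thread.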

Future Work

We will work on optimizing the codebase to keep the leakage rate minimal. This means working on both I/O-bound tasks, such as disk reads and network operations, and CPU-bound tasks: the actual rendering of the images. For example, a simple optimization we made to speed up our Gaussian blur fill was resizing the image down to 10% of its original size, blurring it, and then scaling it back up.
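
The intuition behind that optimization shows up even in one dimension: blurring a downsampled signal and scaling it back up approximates a much wider, costlier blur at full resolution. This toy sketch uses averaging-based helpers that are purely illustrative (the real pipeline does this on images via ImageMagick):

```python
# 1-D toy model of the downscale-blur-upscale trick: the blur touches
# 10x fewer samples, and since the result is blurry anyway, the detail
# lost in downsampling is not missed.
def downsample(signal, factor):
    # Average each block of `factor` samples into one.
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

def box_blur(signal, radius):
    # Simple moving-average blur (stand-in for a Gaussian).
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def upsample(signal, factor):
    # Nearest-neighbor scale back to the original length.
    return [v for v in signal for _ in range(factor)]

signal = [0.0] * 50 + [1.0] * 50  # a hard edge to smooth
cheap = upsample(box_blur(downsample(signal, 10), 1), 10)
```

A radius-1 blur at one-tenth scale covers the same spatial extent as a radius-10 blur at full scale, at a fraction of the cost.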

The rendering engine is a crucial part of the delivery backend at GreedyGame, and we will continue to work on and iterate over it.