Interview with a Pornhub Web Developer

Regardless of your stance on pornography, it would be impossible to deny the massive impact the adult website industry has had on pushing the web forward. From pushing the browser’s video limits to pushing ads through WebSocket so ad blockers don’t detect them, you have to be clever to innovate at the bleeding edge of the web.

I was recently lucky enough to interview a Web Developer at the web’s largest adult website: Pornhub. I wanted to learn about the tech, how web APIs can improve, and what it’s like working on adult websites. Enjoy!

Note: The adult industry is very competitive so there were a few questions they could not answer.  I respect their need to keep their tricks close to the vest.

Adult sites obviously display lots of graphic content.  During the development process, are you using lots of placeholder images and videos?  How far is the development content and experience from the end product?  

We actually don’t use placeholders when we are developing the sites! In the end, what matters is the code and functionality; the interface is something we are very used to at this point. There’s definitely a little bit of a learning curve at first, but we all got used to it pretty quickly.

When it comes to cam streams and third party ad scripts, how do you mock such important, dynamic resources during site and feature development?

For development, the player is broken into two components.  The basic player implements the core functionality and fires events.  Development is done in a clean room. For integration on the sites, we want those third-party scripts and ads running so we can find problems as early in the process as possible.  For special circumstances we’ll work with advertisers to allow us to manually trigger events that might normally be random.

An average page probably has at least one video, GIF advertisements, a few cam performer previews, and thumbnails of other videos.  How do you measure page performance and how do you keep the page as performant as possible? Any tricks you can share?

We use a few measurement systems. 

  • Our player reports metrics back to us about video playback performance and general usage.
  • A third-party RUM system tracks general site performance.
  • WebPageTest private instances let us script tests in the available AWS data centers. We use this mostly for seeing what might have been going on at a given time. It also allows us to view the “waterfall” from different locations and providers.

I have to assume the most important and complex feature on the front-end is the video player. Between incorporating ads before videos, marking highlight moments, changing video speed, and other features, how do you maintain the performance, functionality, and stability of this asset?

We have a dedicated team working strictly on the video player; their first priority is to constantly monitor performance and efficiency. To do so we use pretty much everything available to us: browser performance tools, WebPageTest, metrics, etc. Stability and quality are assured by a solid QA round for any updates we do.

How many people are on the dedicated video team?  How many front-end developers are on the team?

I’d say given the size of the product the team size is lean to average. 

During your time working on adult websites, how have you seen the front-end landscape change?  What new Web APIs have made your life easier?

I’ve definitely seen a lot of improvements across every aspect of the frontend world:

  • From plain CSS to finally using LESS and mixins, to a flexible grid system with media queries and picture tags to accommodate different resolutions and screen sizes
  • We are slowly moving away from jQuery and jQuery UI, going back to more efficient object-oriented programming in vanilla JS. The frameworks are also very interesting in some cases
  • We love the new IntersectionObserver API, very useful for a more efficient way to load images (see the sketch after this list)
  • We have started playing with the Picture-in-Picture API as well, to have that floating video on some of our pages, mainly to get user feedback about the idea.
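
Their implementation isn’t shared, but lazy-loading images with IntersectionObserver generally follows the pattern sketched below (the data-src convention and rootMargin value are illustrative, not theirs):

// Load an image's real source only when it approaches the viewport.
const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // real URL parked in data-src until needed
        obs.unobserve(img);        // each image only needs to load once
    }
}, { rootMargin: '200px' });       // start loading just before it scrolls into view

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));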

Looking forward, are there any Web APIs that you’d love changed, improved, or even created?

A few we would like to see changed or improved are Beacon, WebRTC, Service Workers, and Fetch:

  • Beacon: some iOS issues where it doesn’t quite work with pagehide events (see the sketch after this list)
  • Fetch: no download progress, and no way to intercept requests
  • WebRTC: simulcast layers are limited, even for screen share, if the resolution is not high enough
  • Service Workers: calls to navigator.serviceWorker.register aren’t intercepted by any service worker’s fetch event handlers
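
For context, the pagehide-plus-Beacon pattern that complaint refers to looks like the sketch below (the endpoint is hypothetical; none of this is their code):

// Flush final analytics as the page is being hidden or unloaded.
window.addEventListener('pagehide', () => {
    const payload = JSON.stringify({ event: 'session-end', ts: Date.now() });
    // sendBeacon queues the request so it can outlive the page; this is the
    // combination reported above as unreliable on some iOS versions.
    navigator.sendBeacon('/analytics', payload);
});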

WebVR has been improving in the past few years. How useful is WebVR in its current state, and how much effort are adult sites putting into support for VR content? Do haptics have a role in WebVR on your sites?

We’re investigating WebXR and how best to adapt to emerging spatial computing use cases, and as the largest distribution platform we need to support creators and users however they want to experience our content. But we’re still exploring what content and platforms should look like in these new mediums.

We were the first major platform to support VR, computer vision, and virtual performers, and will continue to push new technology and the open web. 

With so many different types of media and content on each page, what are the biggest considerations when it comes to desktop vs. mobile, if any? 

Mainly functionality restricted by operating system and browser type. iOS vs. Android is the perfect example of a completely different set of access and features.

For example, some iOS devices don’t allow us to use a custom video player in fullscreen; they force the native QuickTime player. That has to be considered when we develop new ideas. Android, on the other hand, gives us complete control, and we can push our features into fullscreen mode.

Adaptive streaming over HLS is another example. IE and Edge are picky when it comes to HLS streaming quality, in that we need to block some of the higher qualities; otherwise the video would constantly stutter and show artifacts.
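
The interview doesn’t name their player library, but with a player built on hls.js, capping the top qualities for a picky browser looks roughly like this sketch (the 720p ceiling and stream URL are illustrative):

// Cap HLS auto quality selection below the problematic top levels.
const video = document.querySelector('video');
if (Hls.isSupported()) {
    const hls = new Hls();
    hls.loadSource('https://cdn.example.com/stream.m3u8');
    hls.attachMedia(video);
    hls.on(Hls.Events.MANIFEST_PARSED, (event, data) => {
        // Find the highest level at or below 720p and stop auto-selection
        // from ever exceeding it, avoiding stutter on the very top levels.
        let cap = 0;
        data.levels.forEach((level, i) => {
            if (level.height <= 720) cap = i;
        });
        hls.autoLevelCapping = cap;
    });
}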

What is the current minimum browser support for the adult sites you work on?  Is Internet Explorer phased out?

We supported IE for a very long time but recently dropped support for anything older than IE11. With that, we also stopped using Flash for the video player. We are focusing mainly on Chrome, Firefox, and Safari.

More broadly, can you share a little about the typical adult site’s stack?  Server and/or front-end? Which libraries are you using?

Most of our sites use the following as a base:

  • Nginx
  • PHP
  • MySQL
  • Memcached and/or Redis

Other technologies like Varnish, Elasticsearch, Node.js, Go, and Vertica are used where appropriate.

For the frontend, we run mostly vanilla JavaScript; we’re slowly getting rid of jQuery, and we are just beginning to play with frameworks, mostly Vue.js.

From an outsider’s perspective, adult sites generally seem to be very much alike:  lots of video thumbnails, aggregated video content, cam performers, adverts. As someone who works on them, what are the differentiating features that make adult sites unique?

We work very hard to give each brand some uniqueness at different levels: content library, UX and feature sets, and a lot of different algorithms.

Before applying and interviewing for your current employer, what were your thoughts on potentially working on adult sites? Did you have any hesitations? If so, how were your fears put to rest?

It never really bothered me; in the end, the challenge was too appealing. The idea of millions of people potentially interacting with features I worked on was really motivating. That proved to be true very quickly: the first time something I worked on went live, I was super proud, and I indeed told all my friends to go check it out! The fact that porn will never die is reassuring for job stability as well!

As far as the end product goes, sharing that you work on adult sites may not be the same as saying you work at a local web agency. Is there a stigma attached to telling friends, family, and acquaintances that you work on adult sites? Is there any hesitance in doing so?

I’m very proud to work on these products, and those close to me are aware of and fascinated by it. It’s always an amazing source of conversation and jokes, and is genuinely interesting.

Having worked at agencies outside the adult industry, is there a difference in atmosphere when working on adult sites?

The atmosphere here is very relaxed and friendly. I don’t notice any major differences with respect to work culture at other agencies, other than the fact that it’s much bigger here than anywhere I have worked previously. 

Being a front-end developer, which teams do you work most closely with?  What are the most common daily communication methods?

We work equally with backend developers, QA testers and product managers – most of the time we simply go up to each other’s desk and talk. If not, chat (MS Teams) is very common. Then come emails.

Lastly, is there anything you’d like to share as a front-end developer working on adult sites?

It’s really exciting being a part of creating how users experience such a widely used product. We are generally at the forefront of trends and big changes in tech as they roll out, which keeps it fun and challenging.

Interview end

I found our interview really enlightening. I was a bit surprised they don’t use placeholder images while developing features and designs. It’s exciting to see that Pornhub continues to push the bleeding edge of the web with WebXR, WebRTC, and IntersectionObserver. I was also happy to see that they consider the current set of web APIs sufficient to start dropping jQuery.

I really wish I’d been able to get more specific tech tips out of them, performance tricks and clever hacks especially. I’m sure there’s a wealth of knowledge to be learned from their source code! What questions would you have asked?

PHP Spellchecker Library

PHP Spellchecker is a library providing a way to spellcheck multiple sources of text with many spellcheckers. The library provides an abstraction layer with a unified interface for various spellcheckers, several of which are supported out of the box.

Here’s a quick example from the documentation using the Aspell spellchecker:

<?php
use PhpSpellcheck\SpellChecker\Aspell;

// assumes the default aspell installation on your local machine
$aspell = Aspell::create();

$misspellings = $aspell->check('mispell', ['en_US'], ['from_example']);

foreach ($misspellings as $misspelling) {
    $misspelling->getWord(); // 'mispell'
    $misspelling->getLineNumber(); // '1'
    $misspelling->getOffset(); // '0'
    $misspelling->getSuggestions(); // ['misspell', ...]
    $misspelling->getContext(); // ['from_example']
}

Here’s an example from the documentation for checking spelling in a file:

<?php
// spellchecking a file; File is the library's file source class
// (namespace assumed here: PhpSpellcheck\Source\File — see the docs)
use PhpSpellcheck\Source\File;

$misspellings = $aspell->check(new File('path/to/file.txt'), ['en_US'], ['from_file']);
foreach ($misspellings as $misspelling) {
    $misspelling->getWord();
    $misspelling->getLineNumber();
    $misspelling->getOffset();
    $misspelling->getSuggestions();
    $misspelling->getContext();
}

Be sure to check out the PHP-Spellchecker Documentation for complete details on installation and usage. You can check out the source code on GitHub at tigitz/php-spellchecker.

Developing Blocker-Friendly Websites

The modern internet has become obsessed with tracking, analytics, and targeted advertising. Almost all of your browsing activity is tracked: sometimes by your network provider, and frequently by the hosts of the websites you visit (and anyone they choose to share data with). If you host a website—and many of us do—chances are good you’re running analytics software on it, collecting metadata about your users’ devices and browsing activity.

You may not even know it, but some of your users are likely ghosts: running tracker-blocking software that blocks analytics scripts before they load. Unless you’re also running analytics on your server-side logs, these users will be silently absent from your stats. I can’t say I blame them; the tracking epidemic has gotten out of control and is still spreading like wildfire. However, by opting out of tracking, these users may unwittingly have a degraded experience on your website when third-party scripts your code depends on fail to load.

You can choose to leave these ghost users to suffer the consequences, or you can take a few simple precautions to ensure everyone has a good experience using your website, even when your third-party services don’t load. As a bonus, following these steps will harden your website, make it resilient to downtime you have no control over, and make it more easily accessible to a wider range of people. It may even improve your SEO, since search index crawlers that ignore JavaScript will predictably experience the same fallback mechanisms as users with blocker software.

Keep third-party scripts to a minimum

The easiest thing you can do is avoid loading third-party resources (anything not hosted on your primary domain) as much as possible. In addition to introducing security and privacy concerns, they can be a huge drag on your site performance, and can be unpredictable: they can easily change without you knowing, which is scary for something that has direct access to your site.

Limiting your exposure to third-party resources might mean hosting JavaScript libraries on your own site instead of a CDN, making your own social network “share” buttons using self-hosted images, or just using fewer third-party libraries. In addition to improving your users’ security and privacy, limiting your third-party scripts will almost certainly improve your site performance—often drastically. JavaScript downloading and parsing occupies a huge chunk of the loading time on most websites.

If you’re unable to host your scripts locally, use async or defer to speed up the time until a user can interact with your website. Keep in mind that deferring script loads means that regular users may interact with your site before a script is loaded; it’s a good idea to ensure your website still works without those resources.
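
For reference, both attributes go directly on the script tag (file names here are illustrative): async scripts run as soon as they finish downloading, in whatever order that happens, while defer scripts run in document order once parsing finishes.

<!-- async: fine for independent scripts like analytics -->
<script src="/js/analytics.js" async></script>

<!-- defer: preserves ordering for scripts that depend on each other -->
<script src="/js/vendor.js" defer></script>
<script src="/js/app.js" defer></script>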

Test your site with JavaScript disabled

Most modern internet browsers still provide an option to browse with JavaScript disabled—it’s rare, but some users do it. Furthermore, any third-party scripts you’re still loading have the potential to fail. Imagine what would happen if you relied on a CDN for your core JavaScript libraries, and the CDN went down: this is the same experience many ghost users will have the first time they visit. Checking your website with JavaScript disabled will give you firsthand experience with failure scenarios, so you’ll be able to implement backup solutions.

Display messages when scripts don’t load

Sometimes you can’t avoid using third-party resources, often in mission-critical features like checkout. In these cases, showing messages to your users when required scripts don’t load will ensure your site never feels “broken”.

You can use noscript tags to display messages to anyone with JavaScript completely disabled, but that won’t help ghost users who are only blocking third-party requests, or services experiencing downtime. I highly recommend adding code in critical processes to verify that all required scripts have loaded and, if they haven’t, display messages explaining what happened.
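
The shape of such a check is simple: once the page has loaded, test for a global the third-party script should have defined, and surface a message if it’s missing. A sketch using Stripe’s checkout script (the element ID and wording are hypothetical):

// If the payment script was blocked or failed to load, explain rather than break.
window.addEventListener('load', () => {
    if (typeof window.Stripe === 'undefined') {
        document.querySelector('#payment-error').textContent =
            'Our payment form could not be loaded. If you use a content ' +
            'blocker, please allow js.stripe.com and reload the page.';
    }
});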

The Electronic Frontier Foundation’s donation page is a great example of this technique in action: when Stripe checkout fails, they show a helpful message explaining what to allow in order to fix it.

Use descriptive image alt text

When an image fails to load, most browsers display the content of the alt attribute in its place. Adding descriptive alt text to all of your images will let ghost users immediately understand your content, and as a bonus, improve accessibility for anyone using a screen reader—the alt text will be read aloud in place of images.

When it comes to descriptions of images, detail is critical. Consider the difference between a caption that says “bowl of soup”, and this descriptive example from Vox Media:

Screenshot of a Vox Media article with image alt text reading "Illustration from above of two hands holding a ramen bowl, which is filled with soup that has two halves of a boiled egg, bok choi, and noodles. two chopsticks are resting on the bowl. other condiments are in front of the bowl."

If your site relies on ad revenue, host acceptable ads locally

Many users running blockers do so for personal security reasons; they’re uncomfortable with widespread data collection and the pervasiveness of malvertising, which is only able to exist because overreaching ad networks want to serve increasingly targeted ads.

If your site relies on ad revenue, strive to host text-based ads or static images on your first-party domain. These ads will avoid ending up on filter lists designed to block malicious actors, allowing you to reach more of your user base. Even better, the rest of your users who aren’t running blockers will enjoy a more secure and private experience on your site.

Help create the internet you want to use

Implementing these techniques will improve the resiliency of your site across the board, and harden it against errors and unexpected downtime. Mobile users in particular—said to make up the majority of all internet traffic—will appreciate faster load times, less data usage, and predictable content placement (without slow-loading ads appearing suddenly). All of your users (including you!) will enjoy better performance and privacy while using your site.

If we want a web that cares about privacy, it’s our responsibility as website creators to exemplify it.


A History of the Internet As We Know It

Can you believe it’s been 30 years since the birth of the internet as we know it? Considering we’re a web hosting company and our entire business is reliant on the internet (aren’t nearly all companies reliant on the internet these days?), we’re feeling a little nostalgic.

So let’s say cheers to the internet as we know it and all the geniuses who make the world wide web spin! Here’s a look back at the pioneers.

Speaking of pioneers…

Wait. Who is Tim Berners-Lee? Tim Berners-Lee is the guy we owe “it all” to. He is the inventor of the World Wide Web and is still the director of the World Wide Web Consortium, commonly known as the W3C. The W3C is like the governing body of the internet, and is responsible for setting the standards for the web such as coding languages, best practices, operating guidelines, and helpful tools.

So of course, you’re going to see his name several times in our history of the internet below, starting with pre-Y2k.

Imagine if the Y2K bug really did bring down the house…

Thankfully, the crisis was averted and we have 20 more years of internet awesomeness to talk about!

What’s missing?

So, do you have a favorite “internet history” moment that is missing from the list?

Or, what new internet craze do you have up your sleeve for the next 20 years? You could be a pioneer just like Tim Berners-Lee. Just get your ideas online and out for the wonderful world wide web to see!

Disabling HTML5 form validation for Laravel Dusk tests

If you’re testing form validation using Laravel Dusk, you’ve probably hit a scenario where the form submission is blocked by Chrome before it’s sent to the server because of HTML5 form validation.

HTML5 form validation blocks a submission when you add an attribute such as required to an input element, or you use type="email". In your Laravel Dusk tests, you’ll probably see this manifesting as a client-side popup dialog in your failing test screenshots.

This is a really tricky thing to test. Since it happens completely client-side without modifying the DOM, it’s hard to assert that the error appears. Even if you do work out a hacky way to do it, it’s not actually testing that your web application’s form validation is working. You’re essentially just testing that Chrome supports HTML5 form validation, and I’m sure the Google/Chromium teams have their own tests for that. 🙂

Your first thought may be to try to disable HTML5 form validation using ChromeOptions in DuskTestCase. Unfortunately, there is no such option. That really only leaves one way: adding the novalidate attribute to your HTML forms. This attribute, when applied to an HTML form, disables the browser’s native form validation. But this has its own drawback in that it disables a perfectly useful feature that probably provides value to your users.
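
For reference, the attribute is just a boolean flag on the form element; its presence alone disables native validation:

<form method="POST" action="/register" novalidate>
    <!-- required, type="email", etc. no longer block submission -->
</form>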

Thankfully, Dusk allows you to inject JavaScript onto the page at runtime! Using this feature, we can dynamically inject a small code snippet that iterates over the forms on the page and adds the novalidate attribute automatically, just for your tests. By adding this browser macro to DuskTestCase:

// In DuskTestCase; requires `use Laravel\Dusk\Browser;` at the top of the file.
public static function prepare()
{
    static::startChromeDriver();

    Browser::macro('disableClientSideValidation', function () {
        // Add the boolean novalidate attribute to every form on the page;
        // the attribute's value doesn't matter, only its presence.
        $this->script('for(var f=document.forms,i=f.length;i--;)f[i].setAttribute("novalidate",i)');

        return $this;
    });
}

you can then call it within your tests like so:

$this->browse(function (Browser $browser) {
    $browser->visit('/register')
        ->disableClientSideValidation()
        ->type('email', 'notvalidemail')
        ->click('@submit')

        ->assertPathIs('/register')
        ->assertSee('The email must be a valid email address.');
});

And voilà! The result is that you’re actually testing your application’s form validation, and you don’t have to give up HTML5 form validation to do it.

Customizable Feedback Component for Laravel

The laravel-kustomizer package is a customizable feedback widget for your Laravel applications. It allows you to implement a customer feedback component on your website using Vue.js, but you can apply it in any frontend tool of your choice.

If you want to use the JavaScript and UI that ships with this package, include them in your layout:

<head>
    <script src="{{ asset('vendor/kustomer/js/kustomer.js') }}" defer></script>
</head>
<body>
    @include('kustomer::kustomer')
</body>

The #kustomer container element needs to live outside your app’s Vue.js container. You can also publish the JS and CSS files to customize the widget heavily.
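
In practice, that means the include should be a sibling of your app’s root element rather than a child of it. A sketch (the #app element is whatever your Vue instance mounts to):

<body>
    <div id="app">
        <!-- your Vue.js application -->
    </div>

    {{-- rendered outside #app so your Vue instance doesn't swallow #kustomer --}}
    @include('kustomer::kustomer')
</body>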

If you’re using Nova, you can use the nova-kustomer tool to add Laravel Kustomizer to your dashboard.

You can learn more about this package, get full installation instructions, and view the source code on GitHub at mydnic/laravel-kustomer.