John Elliott

Logging From Cloudflare Workers

I first tried Cloudflare Workers when building an HTTP cache that did custom content negotiation with an origin server. Having the CDN/cache, a serverless environment, and the Workers HTML rewriter in one place was a handy set of features for our site architecture. We already had an enterprise-level account with Cloudflare, so trying Workers out was a natural first step.

Getting started with Cloudflare Workers was exciting but not straightforward. After configuring the project I realized I had no idea how to view logs.

Unlike Amazon or Google’s cloud function offerings, Cloudflare Workers has no log viewer. It’s bring-your-own. Cloudflare suggests HTTP as the main way to get logs from Workers. In certain cases it’s the only debugging option.1

In this post I’ll share how I developed my solution to logging from Cloudflare Workers.

Building on basic HTTP logging

Basic logging looks like this (from the docs). As they note, console messages can get canceled once an HTTP response is sent.

addEventListener("fetch", event => {
  // Without event.waitUntil(), our fetch() to our logging service
  // may or may not complete.
  event.waitUntil(postLog(stack))
  return fetch(request)}

  function postLog(data) {
    return fetch("https://log-service.example.com/", {
      method: "POST",
      body: data,
    })
  }
})

This is suitable for confirming your service is receiving logs. For sustained work and for production we can gradually improve ergonomics and performance.

Next I’ll review three improvements to the logging setup.

1/3: Improve ergonomics

Wrapping waitUntil with a closure gives us our biggest single improvement. This avoids boilerplate.

addEventListener("fetch", event => {
  const log = LogV1(event.waitUntil.bind(event))
  log('hello')
})
function LogV1(waitCallback) {
  return function logWithWaitCallback(message) {
    return waitCallback(fetch(LOG_SERVICE_URL, {
      body: message,
      method: 'POST',
    }));
  };
}

Mirroring our message to the console helps with errors and with cases where we’re using wrangler tail or the Workers Quick Edit UI. This is like the tee command in Unix.

Supporting more than one argument adds the familiarity of console.log. We can apply the arguments to both our message sinks.

Let’s see those two additions here:

function LogV2(waitCallback) {
  return function logWithWaitCallback(...strings) {
    console.log.apply(null, strings);
    return waitCallback(fetch(LOG_SERVICE_URL, {
      body: strings.join('\n'),
      method: 'POST',
    }));
  };
}
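
Used inside a fetch handler, a call can now mix several values in one message, and both sinks receive them. A hypothetical usage sketch (handle is an illustrative name, and LOG_SERVICE_URL is assumed to be configured):

addEventListener("fetch", event => {
  const log = LogV2(event.waitUntil.bind(event));
  event.respondWith(handle(event.request, log));
});

async function handle(request, log) {
  const response = await fetch(request);
  // console.log-style arguments; joined with newlines for the HTTP body
  log('fetched', request.url, 'status', String(response.status));
  return response;
}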

Organizing

As I began building up this logging, it became clear that I could go wild adding more behavior. Breaking it into smaller chunks was practical at this stage.

Here’s the above example decomposed:

// Logger
function LogV3(waitCallback) {
  return function logWithWaitCallback(...strings) {
    return waitCallback(
      teeV1(strings)
    );
  };
}

// Log the same thing to HTTP and the console
function teeV1(a) {
  console.log.apply(null, a);
  return postToLogServiceV1.apply(null, a);
}

// Log to HTTP log service
function postToLogServiceV1(...strings) {
  const body = strings.join('\n');
  return fetch(LOG_SERVICE_URL, {
    body,
    method: 'POST',
  });
}

2/3: Adding the ability to call from anywhere

Sometimes I’d want to log something in the worker’s outermost scope (e.g. the bundle itself) and still use HTTP logging without access to waitUntil. Other times my logging service would be unavailable. Handling these two cases gave me the flexibility to use this code throughout the project and know I’d be getting information on a best-effort basis.

To handle the missing waitUntil, we can default to Promise.resolve as a stand-in: the call returns immediately, short-circuiting the asynchronous HTTP delay, while some messages still get through. If our origin is slow enough, we’ll get HTTP logs. For my application this was the way to go.

Similarly, we can return Promise.resolve() to forgo HTTP logging when no log service URL is configured.

// Logger
function LogV4(waitCallback = Promise.resolve.bind(Promise)) {
  return function logWithWaitCallback(...strings) {
    return waitCallback(
      teeV2(strings)
    );
  };
}

// Log the same thing to HTTP and the console
function teeV2(a) {
  console.log.apply(null, a);
  return postToLogServiceV2.apply(null, a);
}

// Log to HTTP log service, if defined
function postToLogServiceV2(...strings) {
  if (LOG_SERVICE_URL) {
    const body = strings.join('\n');
    return fetch(LOG_SERVICE_URL, {
      body,
      method: 'POST',
    });
  }
  // unblock waitCallback
  return Promise.resolve();
}
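
Putting those together, here’s a quick usage sketch showing both modes (assuming LOG_SERVICE_URL is bound as a Worker environment variable, possibly empty):

// Inside a fetch handler: waitUntil keeps the log request alive.
addEventListener("fetch", event => {
  const log = LogV4(event.waitUntil.bind(event));
  log('handling', event.request.url);
  event.respondWith(fetch(event.request));
});

// At module scope: no waitUntil, so the Promise.resolve default is used
// and HTTP delivery is best-effort.
const bootLog = LogV4();
bootLog('bundle evaluated');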

Optional waiting, optional HTTP, pretty great for casual use. Next we’ll turn our attention to production.

3/3: Adding deadlines for production workers

A slow log service could also slow down our worker as it dutifully waits for the logging service to acknowledge messages. Let’s cap that request duration with a timeout.

Here’s a generic wrapper that takes a promise and will resolve after the defined timeout if the task isn’t complete. We can nest that inside our main logger function.

function raceTimeout(task, timeoutMillis = 500) {
  return Promise.race([
    task,
    new Promise((resolve) => {
      const timeout = setTimeout(() => {
        console.error('deadline reached');
        resolve();
      }, timeoutMillis);
      // clean up the timer if the log request finishes first
      task.then(() => clearTimeout(timeout));
    }),
  ]);
}
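
As a quick sanity check of the wrapper (a hypothetical snippet, not from my worker):

// Simulate a log service that takes two seconds to acknowledge.
const slow = new Promise(resolve => setTimeout(resolve, 2000));

// Resolves after ~500 ms (the default deadline) and prints
// 'deadline reached' rather than waiting the full two seconds.
raceTimeout(slow).then(() => console.log('done waiting'));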

function LogV5(waitCallback = Promise.resolve.bind(Promise)) {
  return function logWithWaitCallback(...strings) {
    return waitCallback(
      raceTimeout(teeV2(strings), 300)
    );
  };
}

How to collect and view logs?

I didn’t intend to write about log collection, but in brief: I wrote a simplistic log receiver to run in Kubernetes and posted it here: http-stdout. To view incoming messages from http-stdout as it runs, I use a tool called stern, which merges output from all pods in a Kubernetes service.
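
If you’d rather roll your own receiver, the core of it is small. Here’s a minimal sketch of the idea (not the http-stdout code itself) using Node’s built-in http module:

const http = require('http');

// Print each POSTed body to stdout, then acknowledge with 204 No Content.
http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    console.log(body);
    res.writeHead(204);
    res.end();
  });
}).listen(process.env.PORT || 8080);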

Summary

It’s a lot of code just for console logs, but it has worked surprisingly well for months. Most problems I’ve encountered are addressable by adjusting message content rather than changing the message delivery code.

Here’s the complete code:

addEventListener("fetch", event => {
  const log = Log(event.waitUntil.bind(event))
  log('hello') // -> http, stdout
})

const emergencyLogger = Log()
emergencyLogger('Is webpack set up right?') // -> http?, stdout

// Log and wait, but not too long
function Log(waitCallback = Promise.resolve.bind(Promise)) {
  return function logWithWaitCallback(...strings) {
    return waitCallback(
      raceTimeout(tee(strings), 300)
    );
  };
}

// Log the same thing to HTTP and the console
function tee(a) {
  console.log.apply(null, a);
  return postToLogService.apply(null, a);
}

// Log to HTTP log service, if defined
function postToLogService(...strings) {
  if (LOG_SERVICE_URL) {
    const body = strings.join('\n');
    return fetch(LOG_SERVICE_URL, {
      body,
      method: 'POST',
    });
  }
  // unblock waitCallback
  return Promise.resolve()
}

function raceTimeout(task, timeoutMillis = 500) {
  return Promise.race([
    task,
    new Promise((resolve) => {
      const timeout = setTimeout(() => {
        console.error('deadline reached');
        resolve();
      }, timeoutMillis);
      // clean up the timer if the log request finishes first
      task.then(() => clearTimeout(timeout));
    }),
  ]);
}

  1. As of spring 2020, the Workers Quick Edit UI has a secret: everything runs outside Cloudflare’s edge network (where deployed functions run). This closed loop displays console messages, but other Cloudflare products, caching among them, don’t interact with it. The story should change in 2020: progress with wrangler tail and workers dev continues, and in time I hope it’ll make this note irrelevant. ↩︎