
Best Practices for Setting Up Next.js Background Jobs

Background jobs play a pivotal role in efficiently handling various tasks without requiring direct user interaction. These tasks may include batch jobs, CPU and I/O-intensive processes, and long-running workflows. By offloading such tasks to background jobs, applications can enhance availability, reduce response times, and optimize resource usage, resulting in an uninterrupted user experience. Users can continue to interact with the application while essential processes like image thumbnail generation or order processing take place behind the scenes. This blog post outlines the best practices for setting up a serverless background job workflow for Next.js apps, covering scheduler setup, task queue implementation, monitoring, and background worker source code management.

Where to host the background worker

Services like Webrunner allow up to 24 hour max timeouts for background processes

Although Vercel is an excellent option for hosting front-end applications and serverless functions, it may not be the best fit for long-running background jobs due to its 60-second timeouts (10 seconds for Hobby projects). If your background tasks require more time to complete, you might need to explore other alternatives.

The top cloud providers (AWS, GCP, Azure) have longer maximum timeouts for their serverless functions. For example, AWS Lambda offers a maximum timeout of 15 minutes, while Google Cloud Platform (GCP) Cloud Run provides a maximum timeout of 60 minutes. Here's a high-level outline of how you can set up a background worker independently on one of these platforms:

  1. Dockerize your background worker: First, you'll need to containerize your background worker using Docker. This involves creating a Dockerfile that specifies the environment and dependencies required to run your worker (see the example Dockerfile after this list).
  2. Build and push your Docker image: Once you have the Dockerfile ready, you'll build the Docker image and push it to a container registry. Container registries are repositories for storing and managing container images. Set up a CI pipeline to automatically rebuild your Docker image whenever changes are pushed to your repo. CI platforms like Jenkins, CircleCI, or GitHub Actions can be configured to build and test your Docker image in response to code changes.
  3. Set up the background worker on a cloud provider: Once your Docker image is in the container registry, you can deploy and run it on your preferred cloud provider's platform. The exact steps may vary depending on the cloud provider you choose, but the general process involves creating a service or an instance and specifying the Docker image to use.
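
As a rough sketch, a Dockerfile for the worker might look like the following. The Node version, the entry point (scripts/worker.js), and the npm-based install are assumptions; adjust them to your own build.

```dockerfile
# Minimal sketch of a Dockerfile for a Node.js background worker.
# The entry point and install commands are illustrative assumptions.
FROM node:20-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source, including any code shared with the Next.js app.
COPY . .

# Start the worker process instead of the Next.js web server.
CMD ["node", "scripts/worker.js"]
```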

Check out our guide on deploying a Next.js background worker to GCP Cloud Run.

Source code management

Share the same Next.js source code repo for frontend and background processes with Webrunner

When setting up a background worker for your application, one crucial decision you'll face is whether to use shared code (the same Next.js app) or maintain a separate codebase for the background worker. We recommend using shared code. This approach offers several advantages, promoting efficiency and collaboration:

  1. Code reusability and consistency: By having a shared codebase for the background worker and main app, you can leverage existing utilities, functions, and configurations across both components. This reduces duplicated effort and enforces a uniform coding standard, leading to faster development cycles and efficient resource utilization.
  2. Simplified deployment and maintenance: A shared code approach simplifies the deployment process, making it easier to manage and maintain the application as a whole. Updates and improvements to shared components automatically benefit both the main app and the background worker, eliminating redundant efforts and streamlining the CI/CD pipeline.
  3. Improved team collaboration: Effective team collaboration is crucial for success. A shared codebase fosters better communication and coordination among different teams, encouraging a cohesive development ecosystem. This collaborative environment promotes knowledge sharing, skill enhancement, and cross-functional expertise.

Scheduling background jobs

Schedule background jobs with cron expressions

Background jobs can be triggered by either a scheduler or a user action. Schedulers are particularly useful when tasks need to be performed at regular intervals or on a time-based schedule. Scheduled jobs are typically used for tasks that involve iterating over all user accounts or data. For example, you might have a background job that performs a daily backup of user data or sends out weekly email reports.

Vercel cron jobs provide a convenient way of defining schedules directly within your version-controlled repository. However, the same maximum runtime limitation of 60 seconds applies on Vercel. In comparison, cloud providers such as AWS and GCP offer longer run times for scheduled jobs. Setting up scheduled jobs on these platforms involves navigating through admin dashboards or using Infrastructure as Code (IaC) tools like Terraform.
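
For example, a Vercel cron entry in vercel.json (e.g. { "crons": [{ "path": "/api/jobs/daily-report", "schedule": "0 6 * * *" }] }) can point at an ordinary Next.js route handler. The sketch below assumes the App Router and a hypothetical /api/jobs/daily-report route; checking a CRON_SECRET follows Vercel's documented convention for securing cron endpoints.

```typescript
// app/api/jobs/daily-report/route.ts (hypothetical path)
import { NextResponse } from "next/server";

export async function GET(request: Request) {
  // Reject requests that don't carry the expected secret. CRON_SECRET is
  // assumed to be configured in the project's environment variables.
  const authHeader = request.headers.get("authorization");
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // ...do the scheduled work here, staying within the platform's timeout...

  return NextResponse.json({ ok: true });
}
```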

Managing a task queue

Using a task queue such as GCP Pub/Sub or AWS Simple Queue Service (SQS) is a more efficient way to manage background jobs compared to spinning up new invocations for each task. When a user initiates an action that requires background processing, the application can push the task to the queue. The background job workers then consume tasks from the queue and execute them independently. This decoupling of task execution from user interactions ensures smoother and more scalable performance, especially during peak loads. It also allows for better fault tolerance, as tasks can be retried in case of failures.

To set up a task queue for background job processing, follow these general steps:

  1. Provision and configure the queue: Once you have chosen which cloud provider to use, create a new queue and configure it as per your requirements. Set up any access controls or permissions to ensure secure communication between your application and the queue.
  2. Integrate the task queue into your application: Modify your application code to include logic for pushing tasks to the queue. Whenever a user action triggers a background task, instead of processing it directly, enqueue the task into the task queue. This can usually be done by calling an API provided by the task queue service (see the sketch after these steps).
  3. Configure background workers: Ensure that your background workers can consume and process tasks from the queue. Implement mechanisms for checking out and acknowledging tasks to prevent duplicate runs, handle errors, and manage retries effectively.
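
The sketch below illustrates steps 2 and 3 using AWS SQS (one of the two queues mentioned above) via the v3 SDK. The queue URL environment variable, region, and Task shape are assumptions for illustration; Pub/Sub would follow the same pattern with its own client library.

```typescript
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const QUEUE_URL = process.env.JOBS_QUEUE_URL!; // assumed environment variable

interface Task {
  jobId: string;
  type: string;
  payload: Record<string, unknown>;
}

// Step 2: enqueue a task instead of processing it inline, e.g. from a Next.js API route.
export async function enqueueTask(task: Task): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: QUEUE_URL,
      MessageBody: JSON.stringify(task),
    })
  );
}

// Step 3: a worker loop that consumes, processes, and acknowledges tasks.
// Deleting a message is SQS's acknowledgment; messages that aren't deleted
// become visible again after the visibility timeout, which gives you retries.
export async function runWorker(handle: (task: Task) => Promise<void>): Promise<void> {
  while (true) {
    const { Messages = [] } = await sqs.send(
      new ReceiveMessageCommand({
        QueueUrl: QUEUE_URL,
        MaxNumberOfMessages: 10,
        WaitTimeSeconds: 20, // long polling keeps idle requests cheap
      })
    );

    for (const message of Messages) {
      try {
        await handle(JSON.parse(message.Body!) as Task);
        await sqs.send(
          new DeleteMessageCommand({
            QueueUrl: QUEUE_URL,
            ReceiptHandle: message.ReceiptHandle!,
          })
        );
      } catch (err) {
        // Leave the message in the queue so it is retried after the visibility timeout.
        console.error("Task failed and will be retried", err);
      }
    }
  }
}
```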

If you're setting up a queue with cloud providers, you may want to consider using IaC tools like Terraform to commit this configuration into version control. This enables better version tracking and facilitates seamless replication across multiple environments.

Monitoring background jobs

Webrunner monitoring is essential to keep track of the status, outcome, and logs of background jobs.

Monitoring is essential to keep track of the status, outcome, and logs of background jobs. For example, you might want to know if a specific job succeeded or failed, how much time it took to execute, or whether there were warning messages in its logs. Both Google Cloud and AWS offer built-in monitoring solutions. In Google Cloud, Cloud Logging (formerly Stackdriver) automatically captures logs from Cloud Functions and Cloud Run services. In AWS, Lambda automatically logs function activity, and you can access these logs through CloudWatch. Additionally, you can implement custom logging within your background job code to capture specific details.
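
As a small sketch of custom logging, emitting one structured JSON line per job run makes results easy to filter on in CloudWatch or Cloud Logging, both of which index JSON fields. The field names here are illustrative.

```typescript
// Emit a single structured log line per job run.
function logJobResult(jobId: string, status: "succeeded" | "failed", startedAt: number): void {
  console.log(
    JSON.stringify({
      jobId,
      status,
      durationMs: Date.now() - startedAt,
      timestamp: new Date().toISOString(),
    })
  );
}
```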

In addition to monitoring logs for internal debugging, you may want to provide transparency to your users by displaying the status of background jobs on the frontend. This can be particularly helpful for tasks that directly impact the user experience, such as processing user-generated content or handling important transactions. To achieve this, you'll want to consider the following steps in broad strokes:

  1. Persist job details in a database: When a user triggers a background job, store all metadata related to the job in a database. This metadata could include a unique identifier for the job, its current status (e.g., queued), any relevant parameters or task-specific data, and a timestamp to track the job's creation time.
  2. Real-time job status updates: As background jobs are picked up by the processing workers and start execution, update their status from "queued" to "processing" in the database. By doing so, the frontend can access and display the latest information on the job's progress in real time. Additionally, incorporate exception handling so that the job status can be updated if the worker encounters an unexpected error (see the sketch after this list).
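
A minimal sketch of that flow is shown below. The JobRecord shape is hypothetical, and the in-memory Map stands in for a real database table (Postgres, Prisma, etc.); the point is the status lifecycle, not the storage choice.

```typescript
import { randomUUID } from "crypto";

type JobStatus = "queued" | "processing" | "succeeded" | "failed";

interface JobRecord {
  id: string;
  status: JobStatus;
  params: Record<string, unknown>;
  error?: string;
  createdAt: Date;
  updatedAt: Date;
}

// In-memory stand-in for a database table; replace with your persistence layer.
const jobs = new Map<string, JobRecord>();

// Step 1: record the job when the user triggers it, before handing it to the queue.
export function createJob(params: Record<string, unknown>): JobRecord {
  const now = new Date();
  const job: JobRecord = {
    id: randomUUID(),
    status: "queued",
    params,
    createdAt: now,
    updatedAt: now,
  };
  jobs.set(job.id, job);
  return job;
}

// Step 2: the worker updates the status as it makes progress, so the frontend
// can poll this record and show the latest state (or an error) to the user.
export function markJob(id: string, status: JobStatus, error?: string): void {
  const job = jobs.get(id);
  if (!job) return;
  job.status = status;
  job.error = error;
  job.updatedAt = new Date();
}
```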

How Webrunner simplifies background jobs

Webrunner is a platform that simplifies the deployment of background workers. By linking your GitHub repository to Webrunner, you can seamlessly deploy your background worker with minimal or even no-code effort. Here's how easy it is to get started:

  1. Sign up for Webrunner and link your GitHub account. This enables Webrunner to listen to code changes and deploy background workers whenever a new version of the app is pushed.
  2. Define scheduled jobs. You can define scheduled jobs either in a JSON file in your source-controlled repo or with Webrunner's simple UI.
  3. Kick off background jobs. You can initiate background jobs by calling the Webrunner API in response to user actions.

With Webrunner, you gain a host of powerful features that simplify background job management. For instance, it offers a maximum timeout of 24 hours, longer than most other options, allowing your background tasks to handle more extensive, long-lived operations without any hassle.

Moreover, Webrunner works seamlessly with the same repository that powers your Next.js app. This means you can conveniently manage and coordinate frontend, backend, and background tasks within a single codebase, without requiring any additional code to deploy tasks on Webrunner alongside your current Next.js repo.

Defining scheduled background jobs is made easy with Webrunner. You have the flexibility to use a user-friendly UI or a JSON configuration file in your source code to specify and manage scheduled jobs. This enables version-controlled scheduled jobs and allows you to take advantage of longer run times, extending up to 24 hours.

Additionally, Webrunner handles the task queue setup for you. Once your Next.js app is deployed on Webrunner, you can effortlessly access your Next.js endpoints through straightforward URLs like https://run.webrunner.io/{appId}/api/your/Endpoint. By utilizing the Webrunner API, you can query the status of each task, providing real-time insights to your end users, allowing them to stay informed and updated on task progress.

Learn more about Webrunner and try it out today at webrunner.io.
