Implementing an async caching strategy in a Laravel application is essential when you need to deliver data that is slow or expensive to retrieve.
Modern web applications, especially those handling vast amounts of data and experiencing high user traffic, frequently encounter performance issues. These issues primarily stem from database queries taking too long to execute, leading to slower page loads, diminished user experiences, and, ultimately, a loss of users and revenue.
Imagine a scenario where your application’s users are waiting for what feels like an eternity for pages to load because the underlying database queries are overwhelmed. This isn’t just a minor inconvenience. Slow response times can frustrate users and erode their confidence in your service. The situation worsens during peak traffic periods, when the increased load can cause significant slowdowns or even downtime. As developers and business owners strive to optimize performance, they often find themselves at a crossroads, needing to balance the immediacy of data access with the practical limitations of database processing power.
What if the database is slow? Can I build an app so that a user never has to wait on the database again? This is a question our engineers at Sweetwater face every day. With millions of database queries a day, you can imagine the load our databases take.
We leverage the Laravel Cache::remember() method quite a bit. If you are unfamiliar with how it works, it’s simple: it takes three arguments, a $key, a $ttl, and a $callback. The $key parameter is the key the cached value is stored under. The $ttl parameter is how long the key should be valid; for example, a TTL of 30 means the cached value is valid for 30 seconds. Finally, the $callback parameter: when your cache key expires, this is the function that is called to compute and cache the value again.
Now, without the wonderful Cache::remember method, we’d have to do something like the following to get and cache a list of users:
if (Cache::has('all-users')) {
    $allUsers = Cache::get('all-users');
} else {
    $allUsers = User::get();

    Cache::put('all-users', $allUsers, 30);
}
The above code could easily be replaced with the following:
$allUsers = Cache::remember('all-users', 30, function () {
    return User::get();
});
So what’s the problem with this? If the all-users cache key has expired, we call User::get(). What if that User::get() call took 45 seconds to resolve? Users would see that delay, and if the app is getting a lot of traffic, the site could go down due to hanging requests. What if we could run that User::get() call in a background worker instead?
Enter my proposal: Cache::rememberAsync(). This method would be called similarly to the Cache::remember method from before. However, when the cache key expires, it would run the $callback in a background process via a queue worker: at the moment a key expires, we return the old value and refresh it in the background. What would this code look like?
function rememberAsync($key, $ttl, $callback, $queue = 'default')
{
    $currentValue = Cache::get($key);

    // Cache hit: return the fresh value immediately.
    if (! is_null($currentValue)) {
        return $currentValue;
    }

    $fallbackKey = "{$key}/fallback";

    // Cache miss: refresh the value in a queue worker. Run the
    // callback once and feed its result to both the TTL'd key
    // and the never-expiring fallback key.
    dispatch(function () use ($key, $callback, $ttl, $fallbackKey) {
        $value = $callback();

        Cache::put($key, $value, $ttl);
        Cache::forever($fallbackKey, $value);
    })->onQueue($queue);

    // Serve the stale fallback value (if any) while the refresh runs.
    if ($fallbackValue = Cache::get($fallbackKey)) {
        return $fallbackValue;
    }

    return null;
}
Let me break down what this function is doing:

- If the cache key still holds a value, return it immediately.
- Otherwise, dispatch a job that runs the $callback, caching its value in the cache key plus an extra fallback cache key.
- Return the fallback value (or null) while the $callback runs in the queue worker process.

In closing, this way of caching is beneficial in two main ways: users are never left waiting on a slow $callback when a key expires, and the expensive work happens in a background worker instead of during a web request.
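For completeness, here is what calling the rememberAsync() helper from above might look like, reusing the earlier all-users example (the cache-refresh queue name is illustrative):

```php
// Returns the cached (possibly stale) user list immediately; when the
// 'all-users' key has expired, a queue worker re-runs the closure.
$allUsers = rememberAsync('all-users', 30, function () {
    return User::get();
}, 'cache-refresh');
```

Note that the very first call, before anything has ever been cached, returns null, so callers need to tolerate an empty result until the worker finishes.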
A quick note: this technique should only be used on data that doesn’t absolutely have to be real time. If your app depends on data that is up to date to the second, you should look into other caching techniques.
To make a piece of code async (run in parallel to other parts of your code), you can use Laravel Queues to dispatch jobs via a closure or a job class.
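As a minimal sketch of the closure form (the report-generation call is a stand-in for any slow task, and the reports queue name is illustrative):

```php
// The closure is serialized and executed later by a queue worker,
// so the current request does not wait for the slow work.
dispatch(function () {
    Report::generate(); // hypothetical slow task
})->onQueue('reports');
```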
Laravel Cache refers to a way to store pieces of data in a quickly accessible way to increase performance in an application or API. Common use-cases for caching are storing the results of a database query, or storing the result of an external API call.
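The external-API use-case looks much like the database one; a sketch, assuming a hypothetical rates endpoint:

```php
// Cache the API response for 10 minutes so repeated requests
// don't hit the external service every time.
$rates = Cache::remember('exchange-rates', 600, function () {
    return Http::get('https://api.example.com/rates')->json();
});
```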
Scaling a Laravel application to handle thousands of requests per second requires a combination of architectural, caching, and infrastructure strategies. Here are five effective techniques to achieve this level of scalability:
Horizontal scaling involves adding more servers or instances to your pool to handle increased load, as opposed to vertical scaling (upgrading the existing servers to have more power). Use a load balancer to distribute incoming requests evenly across these instances. This approach helps in managing a large number of concurrent requests by ensuring no single server becomes a bottleneck.
Move time-consuming tasks like sending emails, generating reports, or processing images to background jobs using Laravel’s queue system. This allows the main application processes to respond to user requests more quickly. Laravel supports various queue backends like Redis, Amazon SQS, and database queues, enabling asynchronous processing and improving the overall user experience by freeing up resources for the main tasks.
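A queued job class for the email case might look like this sketch (SendWelcomeEmail and WelcomeMail are illustrative names):

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Mail;

class SendWelcomeEmail implements ShouldQueue
{
    use Dispatchable, Queueable, SerializesModels;

    public function __construct(public User $user) {}

    // Runs on a queue worker, not during the web request.
    public function handle(): void
    {
        Mail::to($this->user)->send(new WelcomeMail($this->user));
    }
}

// In the controller: returns immediately, a worker sends the mail later.
SendWelcomeEmail::dispatch($user);
```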
Efficient database use is critical for scaling. Optimize your database queries and structures to reduce the load on your database servers. Ensure proper indexing of tables to speed up query times. Consider using Laravel’s Eloquent ORM efficiently to avoid N+1 query problems by eager loading relationships only when necessary. Additionally, using a read-replica database can offload read queries from the main database, thereby improving performance.
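The N+1 problem mentioned above can be sketched like this (an orders relationship on the User model is assumed):

```php
// N+1: one query for the users, then one orders query per user.
$users = User::get();
foreach ($users as $user) {
    echo $user->orders->count();
}

// Eager loading: two queries total, regardless of user count.
$users = User::with('orders')->get();
foreach ($users as $user) {
    echo $user->orders->count();
}
```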
Caching is crucial for reducing database load and speeding up request processing. Use Laravel’s caching system to store frequently accessed data in memory. Techniques like cache busting, tagging, and hierarchical cache can help manage cache effectively. Implementing page caching for static pages or parts of pages can also drastically reduce the load on your application servers.
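Cache tagging, for instance, lets related entries be invalidated as a group (this requires a taggable store such as Redis or Memcached; the tag and key names are illustrative):

```php
// Store entries under a shared tag...
Cache::tags(['users'])->put('all-users', $allUsers, 30);
Cache::tags(['users'])->put('active-users', $activeUsers, 30);

// ...then bust the whole group at once when user data changes.
Cache::tags(['users'])->flush();
```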
Offload the delivery of static assets (images, CSS, JavaScript) to a CDN. This reduces the load on your application servers and speeds up content delivery by serving assets from a location closest to the user. A CDN also helps in handling spikes in traffic by distributing the load across its global network of servers.
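In Laravel, pointing the asset() helper at a CDN is typically just configuration (the CDN hostname is illustrative):

```php
// config/app.php — asset() URLs are prefixed with ASSET_URL when set.
'asset_url' => env('ASSET_URL'),

// .env
// ASSET_URL=https://cdn.example.com
```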