Huey’s API

Most end-users will interact with the API using the two decorators:

  • task()
  • periodic_task()

The API documentation will follow the structure of the huey API, starting with the highest-level interfaces (the decorators) and eventually discussing the lowest-level interfaces, the BaseQueue and BaseDataStore objects.

Function decorators and helpers

class Huey(queue[, result_store=None[, schedule=None[, events=None[, store_none=False[, always_eager=False]]]]])

Huey executes tasks by exposing function decorators that cause the function call to be enqueued for execution by the consumer.

Typically your application will only need one Huey instance, but you can have as many as you like – the only caveat is that one consumer process must be executed for each Huey instance.

Parameters:
  • queue – a queue instance, e.g. RedisQueue.
  • result_store – a place to store results and the task schedule, e.g. RedisDataStore.
  • schedule – scheduler implementation, e.g. an instance of RedisSchedule.
  • events – event emitter implementation, e.g. an instance of RedisEventEmitter.
  • store_none (boolean) – Flag to indicate whether tasks that return None should store their results in the result store.
  • always_eager – Useful for testing, this will execute all tasks immediately, without enqueueing them.

Example usage:

from huey import RedisHuey, crontab

huey = RedisHuey('my-app')

# THIS IS EQUIVALENT TO ABOVE CODE:
# from huey.api import Huey
# from huey.backends.redis_backend import RedisBlockingQueue, RedisDataStore, \
#     RedisSchedule
#
# queue = RedisBlockingQueue('my-app')
# result_store = RedisDataStore('my-app')
# schedule = RedisSchedule('my-app')
# huey = Huey(queue, result_store, schedule)

@huey.task()
def slow_function(some_arg):
    # ... do something ...
    return some_arg

@huey.periodic_task(crontab(minute='0', hour='3'))
def backup():
    # do a backup every day at 3am
    return

task([retries=0[, retry_delay=0[, retries_as_argument=False[, include_task=False]]]])

Function decorator that marks the decorated function for processing by the consumer. Calls to the decorated function will do the following:

  1. Serialize the function call into a message suitable for storing in the queue
  2. Enqueue the message for execution by the consumer
  3. If a result_store has been configured, return an AsyncData instance which can retrieve the result of the function, or None if not using a result store.

Note

Huey can be configured to execute the function immediately by instantiating it with always_eager = True – this is useful for running in debug mode or when you do not wish to run the consumer.
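
For example, a test configuration might look something like the following (a minimal sketch; it assumes RedisHuey forwards always_eager to the Huey constructor and uses a hypothetical add() task):

from huey import RedisHuey

huey = RedisHuey('my-app', always_eager=True)

@huey.task()
def add(a, b):
    return a + b

# the task body runs immediately in the calling process -- nothing is
# enqueued and no consumer needs to be running
add(1, 2)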

Here is how you might use the task decorator:

# assume that we've created a huey object
from huey import RedisHuey

huey = RedisHuey()

@huey.task()
def count_some_beans(num):
    # do some counting!
    return 'Counted %s beans' % num

Now, whenever you call this function in your application, the actual processing will occur when the consumer dequeues the message and your application will continue along on its way.

Without a result store:

>>> res = count_some_beans(1000000)
>>> res is None
True

With a result store:

>>> res = count_some_beans(1000000)
>>> res
<huey.api.AsyncData object at 0xb7471a4c>
>>> res.get()
'Counted 1000000 beans'

Parameters:
  • retries (int) – number of times to retry the task if an exception occurs
  • retry_delay (int) – number of seconds to wait between retries
  • retries_as_argument (boolean) – whether the number of retries should be passed in to the decorated function as an argument.
  • include_task (boolean) – whether the task instance itself should be passed in to the decorated function as the task argument.
Return type:

decorated function

The return value of any calls to the decorated function depends on whether the Huey instance is configured with a result_store. If a result store is configured, the decorated function will return an AsyncData object which can fetch the result of the call from the result store – otherwise it will simply return None.

The task decorator also does one other important thing – it adds a special function onto the decorated function, which makes it possible to schedule the execution for a certain time in the future:

{decorated func}.schedule(args=None, kwargs=None, eta=None, delay=None, convert_utc=True)

Use the special schedule function to schedule the execution of a queue task for a given time in the future:

import datetime

# get a datetime object representing one hour in the future
in_an_hour = datetime.datetime.now() + datetime.timedelta(seconds=3600)

# schedule "count_some_beans" to run in an hour
count_some_beans.schedule(args=(100000,), eta=in_an_hour)

# another way of doing the same thing...
count_some_beans.schedule(args=(100000,), delay=(60 * 60))

Parameters:
  • args – arguments to call the decorated function with
  • kwargs – keyword arguments to call the decorated function with
  • eta (datetime) – the time at which the function should be executed
  • delay (int) – number of seconds to wait before executing function
  • convert_utc – whether the eta should be converted from local time to UTC, defaults to True
Return type:

Like calls to the decorated function, returns an AsyncData object if a result store is configured; otherwise returns None.

{decorated func}.call_local

Call the @task-decorated function without enqueueing the call. Or, in other words, call_local() provides access to the actual function.

>>> count_some_beans.call_local(1337)
'Counted 1337 beans'

{decorated func}.task_class

A reference to the task class for the decorated function.

>>> count_some_beans.task_class
tasks.queuecmd_count_beans

periodic_task(validate_datetime)

Function decorator that marks the decorated function for processing by the consumer at a specific interval. Unlike task(), which enqueues calls for execution by the consumer, calls to a function decorated with periodic_task will execute normally; the decorator serves to mark the function as one the consumer should execute periodically on its own.

Note

By default, the consumer will execute periodic_task functions. To disable this, run the consumer with -n or --no-periodic.

The validate_datetime parameter is a function which accepts a datetime object and returns a boolean indicating whether or not the decorated function should execute at that time. The consumer will send a datetime to the function every minute, giving it the same granularity as the linux crontab, which it was designed to mimic.

For simplicity, there is a special function crontab(), which can be used to quickly specify intervals at which a function should execute. It is described below.

Here is an example of how you might use the periodic_task decorator and the crontab helper:

from huey import crontab
from huey import RedisHuey

huey = RedisHuey()

@huey.periodic_task(crontab(minute='*/5'))
def every_five_minutes():
    # this function gets executed every 5 minutes by the consumer
    print("It's been five minutes")

Note

Because functions decorated with periodic_task are meant to be executed at intervals in isolation, they should not take any required parameters nor should they be expected to return a meaningful value. This is the same regardless of whether or not you are using a result store.

Parameters:validate_datetime – a callable which takes a datetime and returns a boolean indicating whether the decorated function should execute at that time
Return type:decorated function

Like task(), the periodic task decorator adds several helpers to the decorated function. These helpers allow you to “revoke” and “restore” the periodic task, effectively enabling you to pause it or prevent its execution.

{decorated_func}.revoke([revoke_until=None[, revoke_once=False]])

Prevent the given periodic task from executing. When no parameters are provided the function will not execute again.

This function can be called multiple times, but each call will overwrite the limitations of the previous.

Parameters:
  • revoke_until (datetime) – Prevent the execution of the task until the given datetime. If None it will prevent execution indefinitely.
  • revoke_once (bool) – If True will only prevent execution the next time it would normally execute.

import datetime

# skip the next execution
every_five_minutes.revoke(revoke_once=True)

# pause the command indefinitely
every_five_minutes.revoke()

# pause the command for 24 hours
every_five_minutes.revoke(datetime.datetime.now() + datetime.timedelta(days=1))

{decorated_func}.is_revoked([dt=None])

Check whether the given periodic task is revoked. If dt is specified, it will check if the task is revoked for the given datetime.

Parameters:dt (datetime) – if provided, checks whether the task is revoked at the given datetime
{decorated_func}.restore()

Clear any revoked status and resume running the task normally.
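
For example, continuing with the every_five_minutes task from above, the revoke helpers might be used together like this (a short sketch):

import datetime

# pause the periodic task indefinitely
every_five_minutes.revoke()
every_five_minutes.is_revoked()  # True

# check whether it would be revoked at a particular time
tomorrow = datetime.datetime.now() + datetime.timedelta(days=1)
every_five_minutes.is_revoked(dt=tomorrow)

# resume normal execution
every_five_minutes.restore()
every_five_minutes.is_revoked()  # False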

If you want access to the underlying task class, it is stored as an attribute on the decorated function:

{decorated_func}.task_class

A reference to the task class for the decorated function.

crontab(month='*', day='*', day_of_week='*', hour='*', minute='*')

Convert a “crontab”-style set of parameters into a test function that will return True when a given datetime matches the parameters set forth in the crontab.

Acceptable inputs:

  • “*” = every distinct value
  • “*/n” = run every “n” times, i.e. hour=’*/4’ == 0, 4, 8, 12, 16, 20
  • “m-n” = run every time m..n
  • “m,n” = run on m and n
Return type:a test function that takes a datetime and returns a boolean
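
The returned test function can also be called directly with a datetime, which is handy for sanity-checking a crontab definition (the dates below are purely illustrative):

>>> from huey import crontab
>>> import datetime
>>> validate = crontab(minute='0', hour='3')
>>> validate(datetime.datetime(2013, 1, 1, 3, 0))
True
>>> validate(datetime.datetime(2013, 1, 1, 4, 0))
False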

AsyncData

class AsyncData(huey, task)

Although you will probably never instantiate an AsyncData object yourself, they are returned by any calls to task() decorated functions (provided that “huey” is configured with a result store). The AsyncData talks to the result store and is responsible for fetching results from tasks. Once the consumer finishes executing a task, the return value is placed in the result store, allowing the producer to retrieve it.

Working with the AsyncData class is very simple:

>>> from main import count_some_beans
>>> res = count_some_beans(100)
>>> res  # what is "res" ?
<huey.queue.AsyncData object at 0xb7471a4c>

>>> res.get()  # get the result of this task, assuming it executed
'Counted 100 beans'

What happens when data isn’t available yet? Let’s assume the next call takes about a minute to calculate:

>>> res = count_some_beans(10000000) # let's pretend this is slow
>>> res.get()  # data is not ready, so returns None

>>> res.get() is None  # data still not ready
True

>>> res.get(blocking=True, timeout=5)  # block for 5 seconds
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/charles/tmp/huey/src/huey/huey/queue.py", line 46, in get
    raise DataStoreTimeout
huey.exceptions.DataStoreTimeout

>>> res.get(blocking=True)  # no timeout, will block until it gets data
'Counted 10000000 beans'

get([blocking=False[, timeout=None[, backoff=1.15[, max_delay=1.0[, revoke_on_timeout=False]]]]])

Attempt to retrieve the return value of a task. By default, it will simply ask for the value, returning None if it is not ready yet. If you want to wait for a value, you can specify blocking = True – this will loop, backing off up to the provided max_delay until the value is ready or until the timeout is reached. If the timeout is reached before the result is ready, a DataStoreTimeout exception will be raised.

Parameters:
  • blocking – boolean, whether to block while waiting for task result
  • timeout – number of seconds to block for (used with blocking=True)
  • backoff – factor by which to back off the polling delay each time no result is found
  • max_delay – maximum amount of time to wait between iterations when attempting to fetch result.
  • revoke_on_timeout (bool) – if a timeout occurs, revoke the task

revoke()

Revoke the given task. Unless it is in the process of executing, it will be revoked and the task will not run.

in_an_hour = datetime.datetime.now() + datetime.timedelta(seconds=3600)

# run this command in an hour
res = count_some_beans.schedule(args=(100000,), eta=in_an_hour)

# oh shoot, I changed my mind, do not run it after all
res.revoke()

restore()

Restore the given task. Unless it has already been skipped over, it will be restored and run as scheduled.
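
Continuing the example above, the revoked task can be restored before its scheduled time passes:

# changed my mind again -- run it after all
res.restore()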

Queues and DataStores

Huey communicates with two types of data stores – queues and datastores. Thinking of them as python datatypes, a queue is sort of like a list and a datastore is sort of like a dict. Queues are FIFOs that store tasks – producers put tasks in on one end and the consumer reads and executes tasks from the other. DataStores are key-based stores that can store arbitrary results of tasks keyed by task id. DataStores can also be used to serialize task schedules so in the event your consumer goes down you can bring it back up and not lose any tasks that had been scheduled.

Huey, like just about a zillion other projects, uses a “pluggable backend” approach, where the interface is defined on a handful of base classes (BaseQueue, BaseDataStore, BaseSchedule and BaseEventEmitter), and you can write an implementation for any datastore you like. The project ships with backends that talk to redis, a fast key-based datastore, but the sky’s the limit when it comes to what you want to interface with. Below is an outline of the methods that must be implemented on each class.

Base classes

class BaseQueue(name, **connection)

Queue implementation – any connections that must be made should be created when instantiating this class.

Parameters:
  • name – A string representation of the name for this queue
  • connection – Connection parameters for the queue
blocking = False

Whether the backend blocks when waiting for new data. If set to False, the backend will be polled at intervals; if set to True, reads will block until data is available.

write(data)

Write data to the queue - has no return value.

Parameters:data – a string
read()

Read data from the queue, returning None if no data is available – an empty queue should not raise an Exception!

Return type:a string message or None if no data is present
remove(data)

Remove all instances of the given data from the queue, returning the number of instances removed.

Parameters:data (string) – the data to remove
Return type:number of instances removed
flush()

Optional: Delete everything in the queue – used by tests

__len__()

Optional: Return the number of items in the queue – used by tests
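
To make the interface concrete, here is a minimal sketch of an in-memory queue, purely for illustration and assuming the base classes are importable from huey.backends.base as they are for the bundled redis backend:

from collections import deque

from huey.backends.base import BaseQueue


class MemoryQueue(BaseQueue):
    blocking = False  # the consumer should poll rather than block on reads

    def __init__(self, name, **connection):
        super(MemoryQueue, self).__init__(name, **connection)
        self._queue = deque()

    def write(self, data):
        self._queue.appendleft(data)

    def read(self):
        try:
            return self._queue.pop()  # FIFO: read from the opposite end
        except IndexError:
            return None  # an empty queue must not raise

    def remove(self, data):
        count = self._queue.count(data)
        for _ in range(count):
            self._queue.remove(data)
        return count

    def flush(self):
        self._queue.clear()

    def __len__(self):
        return len(self._queue)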

class BaseDataStore(name, **connection)

Data store implementation – any connections that must be made should be created when instantiating this class.

Parameters:
  • name – A string representation of the name for this data store
  • connection – Connection parameters for the data store
put(key, value)

Store the value using the key as the identifier

peek(key)

Retrieve the value stored at the given key, returns a special value EmptyData if nothing exists at the given key.

get(key)

Retrieve the value stored at the given key, returns a special value EmptyData if no data exists at the given key. This is to differentiate between “no data” and a stored None value.

Warning

After a result is fetched it will be removed from the store!

flush()

Remove all keys
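
Similarly, a minimal in-memory data store might look like this, again purely for illustration and assuming BaseDataStore and the EmptyData sentinel are importable from huey.backends.base and huey.utils respectively, as in the bundled redis backend:

from huey.backends.base import BaseDataStore
from huey.utils import EmptyData


class MemoryDataStore(BaseDataStore):
    def __init__(self, name, **connection):
        super(MemoryDataStore, self).__init__(name, **connection)
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def peek(self, key):
        # non-destructive read
        return self._data.get(key, EmptyData)

    def get(self, key):
        # destructive read -- the value is removed once it has been fetched
        return self._data.pop(key, EmptyData)

    def flush(self):
        self._data = {}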

class BaseSchedule(name, **connection)

Schedule tasks; implementations should be able to efficiently find tasks that are ready for execution.

add(data, timestamp)

Add the timestamped data (a serialized task) to the task schedule.

read(timestamp)

Return all tasks that are ready for execution at the given timestamp.

flush()

Remove all tasks from the schedule.

class BaseEventEmitter(channel, **connection)

A send-and-forget event emitter that is used for sending real-time updates for tasks in the consumer.

emit(data)

Send the data on the specified channel.

Redis implementation

All of the following use the Python redis driver written by Andy McCurdy.

class RedisQueue(name, **connection)

Does a simple RPOP to pull messages from the queue, meaning that it polls.

Parameters:
  • name – the name of the queue to use
  • connection – a list of values passed directly into the redis.Redis class
class RedisBlockingQueue(name, read_timeout=None, **connection)

Does a BRPOP to pull messages from the queue, meaning that it blocks on reads. By default Huey will block forever waiting for a message, but you can optionally specify a timeout in seconds. This may prevent the consumer from hanging while waiting on tasks in the event of network disruptions or similar quirks.

Parameters:
  • name – the name of the queue to use
  • read_timeout (int) – limit blocking pop to read_timeout seconds.
  • connection – a list of values passed directly into the redis.Redis class
class RedisDataStore(name, **connection)

Stores results in a redis hash using HSET, HGET and HDEL

Parameters:
  • name – the name of the data store to use
  • connection – a list of values passed directly into the redis.Redis class
class RedisSchedule(name, **connection)

Uses sorted sets to efficiently manage a schedule of timestamped tasks.

Parameters:
  • name – the name of the data store to use
  • connection – a list of values passed directly into the redis.Redis class
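
For example, connection keyword arguments are passed straight through to redis.Redis; the host, port and db values below are purely illustrative:

from huey.api import Huey
from huey.backends.redis_backend import RedisBlockingQueue, RedisDataStore, \
    RedisSchedule

queue = RedisBlockingQueue('my-app', read_timeout=1,
                           host='localhost', port=6379, db=0)
result_store = RedisDataStore('my-app', host='localhost', port=6379, db=0)
schedule = RedisSchedule('my-app', host='localhost', port=6379, db=0)

huey = Huey(queue, result_store=result_store, schedule=schedule)
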
class RedisEventEmitter(channel, **connection)

Uses Redis pubsub to emit json-serialized updates about tasks in real-time.

Parameters:
  • channel – the channel to send messages on.
  • connection – values passed directly to the redis.Redis class.