1. pgq must be installed in both the source and target databases. See the pgqadm man page for details. The target database must also have the pgq_ext schema installed.
2. edit a queue_splitter configuration file, say queue_splitter_sourcedb_sourceq_targetdb.ini
3. create source and target queues
$ pgqadm.py ticker.ini create <queue>
4. launch queue splitter in daemon mode
$ queue_splitter3 queue_splitter_sourcedb_sourceq_targetdb.ini -d
5. start producing and consuming events
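Step 5 can be exercised with plain SQL on the source database: queue_splitter3 routes each event into the target queue named by the event's ev_extra1 field, so setting that field chooses the destination. A minimal sketch (queue, type and payload values here are examples):

-- the 7-argument form of pgq.insert_event() allows setting ev_extra1,
-- which queue_splitter3 reads to pick the target queue
select pgq.insert_event('sourceq', 'ev_type', 'some payload',
                        'targetq', null, null, null);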
job_name
    Name for the particular job the script does. The script will log under this name to logdb/logserver. The name is also used as the default for the PgQ consumer name. It should be unique.

pidfile
    Location for the pid file. If not given, the script is not allowed to daemonize.

logfile
    Location for the log file.

loop_delay
    For a continuously running process, how long to sleep after each work loop, in seconds. Default: 1.

connection_lifetime
    Close and reconnect older database connections.

queue_name
    Queue name to attach to. No default.

consumer_name
    Consumer ID to use when registering. Default: %(job_name)s
[queue_splitter3]
job_name = queue_splitter_sourcedb_sourceq_targetdb

src_db = dbname=sourcedb
dst_db = dbname=targetdb

pgq_queue_name = sourceq

logfile = ~/log/%(job_name)s.log
pidfile = ~/pid/%(job_name)s.pid
The following switches are common to all skytools.DBScript-based Python programs.
-h, --help
    show help message and exit

-q, --quiet
    make program silent

-v, --verbose
    make program more verbose

-d, --daemon
    make program go into background

--ini
    show commented template config file
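For example, the commented template can serve as the starting point for a new configuration (a sketch, assuming --ini may be invoked without an existing config file):

$ queue_splitter3 --ini > queue_splitter_sourcedb_sourceq_targetdb.ini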
The following switches are used to control an already running process. The pidfile is read from the config file, then the signal is sent to the process ID specified there.
-r, --reload
    reload config (send SIGHUP)

-s, --stop
    stop program safely (send SIGINT)

-k, --kill
    kill program immediately (send SIGTERM)
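For example, to safely stop a daemonized queue splitter started with the config above:

$ queue_splitter3 queue_splitter_sourcedb_sourceq_targetdb.ini -s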
How to process events created in a secondary database with several queues while having only one queue in the primary database. This also shows how to easily insert events into queues with regular SQL.
CREATE SCHEMA queue;

CREATE TABLE queue.event1 (
    -- this should correspond to event internal structure
    -- here you can put checks that correct data is put into queue
    id int4,
    name text,
    -- not needed, but good to have:
    primary key (id)
);

-- put data into queue in urlencoded format, skip actual insert
CREATE TRIGGER redirect_queue1_trg
    BEFORE INSERT ON queue.event1
    FOR EACH ROW EXECUTE PROCEDURE pgq.logutriga('singlequeue', 'SKIP');

-- repeat the above for event2

-- now the data can be inserted:
INSERT INTO queue.event1 (id, name) VALUES (1, 'user');
If the queue_splitter is put on "singlequeue", it spreads the events on the target side into queues named "queue.event1", "queue.event2", etc., since pgq.logutriga() stores the table name in the ev_extra1 field that queue_splitter uses for routing. This keeps the PgQ load on the primary database minimal, both CPU-wise and maintenance-wise.
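On the target database, each split queue can then be worked through with the regular PgQ batch API. A minimal sketch (the consumer name is an example; substitute the real batch id returned by pgq.next_batch()):

-- one-time registration on the split queue
select pgq.register_consumer('queue.event1', 'event1_consumer');

-- grab the next batch; returns NULL when there is nothing to process
select pgq.next_batch('queue.event1', 'event1_consumer');

-- read the events of that batch (here using batch id 1 as an example)
select ev_id, ev_type, ev_data from pgq.get_batch_events(1);

-- mark the batch as processed
select pgq.finish_batch(1);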