QUEUE_MOVER3(1)   QUEUE_MOVER3(1)

NAME
queue_mover3 - PgQ consumer that copies data from one queue to another.

SYNOPSIS
queue_mover3 [switches] config.ini

DESCRIPTION
queue_mover3 is a PgQ consumer that transports events from a source queue into a target queue. One use case is when events are produced in several databases and queue_mover3 is used to consolidate them into a single queue that can then be processed by the consumers that need to handle those events. For example, with partitioned databases it is convenient to move events from each partition into one central queue database and process them there. That way the configuration and dependencies of the partition databases stay simpler and more robust. Another use case is to move events from an OLTP database to a batch processing server.
Transactionality: events are inserted as one transaction on the target side. That means only the batch_id needs to be tracked on the target side.
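
To make the transactional behaviour concrete, here is a minimal, hypothetical sketch of such a move loop in Python (psycopg2). It is NOT the queue_mover3 source: the connection strings, queue names and the mover_batch_track tracking table are invented for illustration; the real script tracks batch_id through the pgq_ext schema instead.

# Conceptual sketch only -- not the actual queue_mover3 implementation.
# Assumes psycopg2 and a hypothetical tracking table on the target side:
#   mover_batch_track(consumer_name text primary key, last_batch_id bigint)
import psycopg2

SRC_DSN, DST_DSN = "dbname=sourcedb", "dbname=targetdb"
SRC_QUEUE, DST_QUEUE = "eventlog", "copy_of_eventlog"
CONSUMER = "eventlog_to_target_mover"

def move_one_batch():
    """Copy one PgQ batch from SRC_QUEUE to DST_QUEUE; return True if work was done."""
    src = psycopg2.connect(SRC_DSN)
    dst = psycopg2.connect(DST_DSN)
    try:
        with src.cursor() as s:
            # Ask PgQ on the source side for the next unprocessed batch.
            s.execute("select pgq.next_batch(%s, %s)", (SRC_QUEUE, CONSUMER))
            batch_id = s.fetchone()[0]
            if batch_id is None:
                return False  # nothing to do yet
            s.execute(
                "select ev_type, ev_data, ev_extra1 from pgq.get_batch_events(%s)",
                (batch_id,))
            events = s.fetchall()

        with dst.cursor() as d:
            # Skip batches that were already copied (e.g. after a crash):
            # only batch_id state is kept on the target side.
            d.execute(
                "select last_batch_id from mover_batch_track where consumer_name = %s",
                (CONSUMER,))
            row = d.fetchone()
            if row is None or row[0] < batch_id:
                for ev_type, ev_data, ev_extra1 in events:
                    # Re-insert each event into the target queue.
                    d.execute(
                        "select pgq.insert_event(%s, %s, %s, %s, null, null, null)",
                        (DST_QUEUE, ev_type, ev_data, ev_extra1))
                d.execute(
                    "insert into mover_batch_track (consumer_name, last_batch_id) "
                    "values (%s, %s) on conflict (consumer_name) "
                    "do update set last_batch_id = excluded.last_batch_id",
                    (CONSUMER, batch_id))
        # The copied events and the tracking row are committed as one
        # transaction on the target side.
        dst.commit()

        with src.cursor() as s:
            # Only after the target commit is the batch closed on the source.
            s.execute("select pgq.finish_batch(%s)", (batch_id,))
        src.commit()
        return True
    finally:
        src.close()
        dst.close()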

QUICK-START
Basic PgQ setup and usage can be summarized by the following steps:
1. PgQ must be installed in both the source and the target database. See the pgqadm man page for details.
2. The target database must also have the pgq_ext schema installed. It is used to keep the two databases in sync.
3. Create a queue_mover configuration file, say qmover_sourceq_to_targetdb.ini (a sketch follows this list).
4. Create the source and target queues:
$ pgqadm.py sourcedb_ticker.ini create <srcqueue>
$ pgqadm.py targetdb_ticker.ini create <dstqueue>
5. Launch the queue mover in daemon mode:
$ queue_mover3 -d qmover_sourceq_to_targetdb.ini
6. Start producing and consuming events.
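
As an illustration of step 3, qmover_sourceq_to_targetdb.ini might look roughly like this; the database and queue names are placeholders matching the steps above (see the parameter reference and example config file below):

[queue_mover3]
job_name = sourceq_to_targetdb_mover
src_db = dbname=sourcedb
dst_db = dbname=targetdb
queue_name = srcqueue
dst_queue_name = dstqueue
pidfile = pid/%(job_name)s.pid
logfile = log/%(job_name)s.log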

CONFIG

Common configuration parameters
job_name
Name for the particular job the script does. The script will log under this name to the logdb/logserver. The name is also used as the default PgQ consumer name. It should be unique.
pidfile
Location for the pid file. If not given, the script is not allowed to daemonize.
logfile
Location for the log file.
loop_delay
For a continuously running process, how long to sleep after each work loop, in seconds. Default: 1.
connection_lifetime
Close and reconnect database connections that are older than this many seconds.
use_skylog
If set, use skylog for logging. Default: 0.
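
For instance, the optional tuning parameters above could be added to the config file like this (the values shown are purely illustrative, not recommendations):

loop_delay = 1
connection_lifetime = 300
use_skylog = 0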

Common PgQ consumer parameters
queue_name
Queue name to attach to. No default.
consumer_name
Consumer ID to use when registering. Default: %(job_name)s.

queue_mover3 parameters
src_db
Source database.
dst_db
Target database.
dst_queue_name
Target queue name.

Example config file

[queue_mover3]
job_name = eventlog_to_target_mover
src_db = dbname=sourcedb
dst_db = dbname=targetdb
queue_name = eventlog
dst_queue_name = copy_of_eventlog
pidfile = pid/%(job_name)s.pid
logfile = log/%(job_name)s.log

COMMAND LINE SWITCHES
The following switches are common to all skytools.DBScript-based Python programs.
-h, --help
show help message and exit
-q, --quiet
make program silent
-v, --verbose
make program more verbose
-d, --daemon
make program go background
--ini
show commented template config file.
The following switches are used to control an already running process. The pidfile is read from the config file, then a signal is sent to the process ID specified there.
-r, --reload
reload config (send SIGHUP)
-s, --stop
stop program safely (send SIGINT)
-k, --kill
kill program immediately (send SIGTERM)
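
For example, a typical lifecycle of the script using these switches might look like this (the config file name is the one from the quick-start):

$ queue_mover3 --ini                                  # show a commented template config file
$ queue_mover3 -d qmover_sourceq_to_targetdb.ini      # start in daemon mode
$ queue_mover3 -r qmover_sourceq_to_targetdb.ini      # reload config (SIGHUP)
$ queue_mover3 -s qmover_sourceq_to_targetdb.ini      # stop safely (SIGINT)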

BUGS
Event IDs are not kept on the target side. If they need to be kept, the event_id sequence on the target side has to be increased by hand to inform the ticker about the new events.
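
A hedged sketch of that manual adjustment, assuming the target queue's event id sequence name can be read from pgq.queue.queue_event_seq (the column name, queue name and placeholder values are assumptions to verify against your PgQ version):

-- Find the event id sequence used by the target queue ...
select queue_event_seq from pgq.queue where queue_name = 'copy_of_eventlog';
-- ... and advance it past the current event id on the source side.
select setval('<queue_event_seq from above>', <current source event_id>);
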
04/01/2014  
