Manual Reference Pages  -  OPENXPKI::SERVER::DBI::SQL (3)

Name

OpenXPKI::Server::DBI::SQL

Description

This module implements the SQL layer of the database interface. It provides basic functions that accept parameter hashes describing the SQL operations to perform.

Functions

    General Functions

new

This is the constructor. It expects only an instance of OpenXPKI::Server::DBI::DBH in the parameter DBH.
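
For example (a minimal sketch; $dbh stands for an already initialized OpenXPKI::Server::DBI::DBH instance):

  use OpenXPKI::Server::DBI::SQL;

  # pass the database handle object in the DBH parameter
  my $sql = OpenXPKI::Server::DBI::SQL->new (DBH => $dbh);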

    Directly Mapped Functions

get_new_serial

is directly mapped to OpenXPKI::Server::DBI::DBH->get_new_serial.
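
Example (a sketch; the call is simply passed through to OpenXPKI::Server::DBI::DBH->get_new_serial, and the TABLE parameter shown here is illustrative):

  my $serial = $sql->get_new_serial (TABLE => 'CERTIFICATE');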

    Functions which implement database initialization

table_exists

checks whether the specified table exists. The parameter for the table name is NAME.
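
Example (a sketch using the documented NAME parameter; the table name is illustrative):

  if (not $sql->table_exists (NAME => 'CERTIFICATE')) {
      # the table is missing and can now be created via create_table
  }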

create_table

creates the table specified with the parameter NAME. If MODE is set to DRYRUN, the function returns the SQL commands instead of executing them.

drop_table

drops the table specified with the parameter NAME. If MODE is set to DRYRUN, the function returns the SQL commands instead of executing them. Otherwise MODE must be FORCE, or this method throws an exception.

create_index

creates the index specified with the parameter NAME. If MODE is set to DRYRUN, the function returns the SQL commands instead of executing them.

drop_index

drops the index specified with the parameter NAME. If MODE is set to DRYRUN, the function returns the SQL commands instead of executing them. Otherwise MODE must be FORCE, or this method throws an exception.
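
Example (a sketch combining the functions above; NAME and MODE as documented, the table and index names are illustrative):

  # return the DDL statements instead of executing them
  my $ddl = $sql->create_table (NAME => 'CERTIFICATE', MODE => 'DRYRUN');

  # actually create the table and an index on it
  $sql->create_table (NAME => 'CERTIFICATE');
  $sql->create_index (NAME => 'CERTIFICATE');

  # dropping requires MODE => 'FORCE', otherwise an exception is thrown
  $sql->drop_index (NAME => 'CERTIFICATE', MODE => 'FORCE');
  $sql->drop_table (NAME => 'CERTIFICATE', MODE => 'FORCE');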

    Functions which implement SQL commands

insert

expects TABLE and DATA. DATA is a hash reference containing the names and values of the table columns to use. A column is set to NULL if it is not present in the hash.
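
Example (a sketch in the style of the delete examples below; table and column names are illustrative):

  # any column not listed in DATA is stored as NULL
  $self->{db}->insert (TABLE => 'CRR',
                       DATA  => {CRR_SERIAL => 3});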

update

expects TABLE, WHERE and DATA. DATA is a hash reference containing the names and values of the table columns to use. A column is set to NULL if it is not present in the hash. WHERE is a hash reference containing the parameters for the WHERE clause. All parameters are required; general (unconstrained) updates are not allowed.
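
Example (a sketch; the table and column names are illustrative):

  # set CRR_DATA for the row matched by the WHERE parameters
  $self->{db}->update (TABLE => 'CRR',
                       DATA  => {CRR_DATA   => $data},
                       WHERE => {CRR_SERIAL => 3});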

delete

expects TABLE and DATA. DATA is a hash reference containing the names and values of the table columns to match. These columns are used in the WHERE clause of the SQL DELETE command.

There must be at least one column with a value in the hash reference. For security reasons this interface does not support erasing a complete table.

If you need operators other than = for the columns in the data hash, you can specify an array reference whose first element is the operator and whose second element is the value. Please note that only simple operators are allowed (<, >, <=, >= and =).

Examples:
o erases CRR 3

  $self->{db}->delete (TABLE => 'CRR',
                       DATA  => {CRR_SERIAL => 3});

o erases all CRRs lower than 3

  $self->{db}->delete (TABLE => 'CRR',
                       DATA  => {CRR_SERIAL => ['<', 3]});

By the way, CRRs should never be erased!

__get_symbolic_column_and_table

Expects a string argument (arg). Returns a two-element array containing (arg, undef) if the string contains no '.'. Returns a two-element array containing (first, second) if the string looks like 'first.second'.

get_symbolic_query_columns

Returns a list of symbolic column names for the specified query. If a single table is queried the method returns all table columns. If a join query is specified the method returns symbolic TABLE.COLUMN specifications for this particular query.

select

Select is the most versatile function and has two invocation modes: single table query and natural join.

The method supports the following named static parameters:
o TABLE

is the table to be searched. If this is a scalar value, a single table is queried. If an array reference is passed, the method tries to construct a join. See below for a discussion of joins.

o KEY

is the serial of the table. See SERIAL for more information.

o SERIAL

will be mapped to ${TABLE}_SERIAL. Please note that a SERIAL is not necessarily a unique index in a table. Certificates with identical serials can be present in a table if they were issued by different CAs.

o PIVOT_COLUMN

optional, specifies the key column to apply the following filters on. Defaults to ${TABLE}_SERIAL.

o FROM

creates the SQL filter ${FROM} <= PIVOT_COLUMN.

o TO

creates the SQL filter PIVOT_COLUMN <= ${TO}.

o GREATER_THAN

creates the SQL filter ${GREATER_THAN} < PIVOT_COLUMN.

o LESS_THAN

creates the SQL filter PIVOT_COLUMN < ${LESS_THAN}.

o BETWEEN

creates the SQL filter PIVOT_COLUMN BETWEEN ${FROM} AND ${TO}. The value is expected to be a two-element array reference.

o LIMIT

can either be a number n, which means that only the first n rows are returned, or a hash reference with the keys AMOUNT and START, in which case AMOUNT rows are returned starting at row START.

o REVERSE

reverses the ordering of the results.

o VALID_AT

limits the search to the specified validity (see below).

In addition, the function supports all table columns except the data columns, because these are potentially too large; many databases do not support searching on high-volume columns or columns of flexible length. Dynamic conditions may be specified via a hash reference passed in as the named parameter DYNAMIC. It maps column names to hash references, each consisting of a parameter VALUE and an optional parameter OPERATOR. The parameter VALUE may be a scalar or a hash reference. In the latter case multiple conditions are created that are logically ANDed. The hash value for each key can either be a scalar or an array reference; in the latter case the values are combined by a logical OR.

You can use wildcards inside text fields such as subjects or email addresses. In this case you must specify the parameter OPERATOR with the value LIKE and use % as the wildcard character. This module expects SQL-ready wildcards. It always binds parameters to queries, so SQL injection is impossible.
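
Example (a sketch of a single table query; the table and column names are illustrative):

  my $result = $dbi->select(
      TABLE   => 'CERTIFICATE',
      DYNAMIC => {
          # OPERATOR => 'LIKE' enables the % wildcard
          SUBJECT => { VALUE => '%example.com%', OPERATOR => 'LIKE' },
      },
      # return at most 10 rows, starting at row 0
      LIMIT   => { AMOUNT => 10, START => 0 },
      # reverse the ordering of the results
      REVERSE => 1,
  );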

Joins

In order to issue compound queries across multiple tables, it is possible to call select with an array reference in the named parameter TABLE. If this is the case, the following named parameters are also required:
o COLUMNS

Array reference containing the exact specification of the columns to return. The scalars contained in the array ref should have the form TABLE.COLUMN, with TABLE being one of the tables specified in the TABLE argument.

In the common invocation mode, TABLE is an arrayref containing scalar table names. In this case the join uses these as table names.

Example:



  TABLE => [ 'foo', 'bar' ]



If you wish to reference one table more than once (e.g. for matching multiple tuples from one single table), you can assign a symbolic name to the table. In this case the TABLE arrayref should contain another arrayref with two entries, as follows for the table 'bar'.

Example:



  TABLE => [ 'foo', [ 'bar' => 'symbolic' ] ]



o JOIN

Array reference containing array references that specify the join conditions. The length of each inner arrayref (join condition) must be identical to the number of TABLEs to join. Each scalar element in the join condition may be either undef (which means that the corresponding table is not part of the join condition) or a column name in the corresponding table. If the element is defined, an SQL AND condition is formed between the previously defined element and the current one in order to form the join. It is possible to specify multiple join conditions.

See the example below to get an idea how this is meant to work.

Join example 1



  $result = $dbi->select(
    #          first table  second table        third table
    TABLE => [ 'WORKFLOW', 'WORKFLOW_CONTEXT', 'WORKFLOW_HISTORY' ],

    # return these columns
    COLUMNS => [
        'WORKFLOW.WORKFLOW_SERIAL',
        'WORKFLOW_CONTEXT.WORKFLOW_CONTEXT_KEY',
        'WORKFLOW_CONTEXT.WORKFLOW_CONTEXT_VALUE',
    ],

    JOIN => [
        #  on first table      second table       third
        [ 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL' ],
        # a hypothetical additional join condition only using the columns
        # WORKFLOW_CONTEXT.FOO and WORKFLOW_HISTORY.BAR
        # (just for illustration purposes):
        # [ undef, 'FOO', 'BAR' ],
    ],
    DYNAMIC => {
        'WORKFLOW_HISTORY.WORKFLOW_DESCRIPTION' =>
            { VALUE => 'Added context value somekey-3->somevalue: 100043' },
    },
  );



This results in the following query:



 SELECT
    workflow.workflow_id,
    workflow_context.workflow_context_key,
    workflow_context.workflow_context_value
 FROM workflow, workflow_context, workflow_history
 WHERE workflow.workflow_id=workflow_context.workflow_id
   AND workflow_context.workflow_id=workflow_history.workflow_id
   AND workflow_history.workflow_description like ?
 ORDER BY workflow.workflow_id,
   workflow_context.workflow_context_key,
   workflow_context.workflow_context_value



Join example 2



  $result = $dbi->select(
    #          first table  second table                       third table
    TABLE => [ 'WORKFLOW', [ 'WORKFLOW_CONTEXT' => 'context1' ], [ 'WORKFLOW_CONTEXT' => 'context2' ] ],

    # return these columns
    COLUMNS => [
        'WORKFLOW.WORKFLOW_SERIAL',
        'context1.WORKFLOW_CONTEXT_VALUE',
        'context2.WORKFLOW_CONTEXT_VALUE',
    ],

    JOIN => [
        #  on first table      second table       third
        [ 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL' ],
    ],
    DYNAMIC => {
        'context1.WORKFLOW_CONTEXT_KEY'   => { VALUE => 'somekey-5' },
        'context1.WORKFLOW_CONTEXT_VALUE' => { VALUE => 'somevalue: 100045' },
        'context2.WORKFLOW_CONTEXT_KEY'   => { VALUE => 'somekey-7' },
        'context2.WORKFLOW_CONTEXT_VALUE' => { VALUE => 'somevalue: 100047' },
    },
  );



This results in the following query:



 SELECT
    workflow.workflow_id,
    context1.workflow_context_value,
    context2.workflow_context_value
 FROM workflow, workflow_context as context1, workflow_context as context2
 WHERE workflow.workflow_id=context1.workflow_id
   AND context1.workflow_id=context2.workflow_id
   AND context1.workflow_context_key like ?
   AND context1.workflow_context_value like ?
   AND context2.workflow_context_key like ?
   AND context2.workflow_context_value like ?
 ORDER BY workflow.workflow_id,
   context1.workflow_context_value,
   context2.workflow_context_value



Validity specification for single table queries

Adding the named parameter VALID_AT limits the returned results to entries that are valid at the given time(s), based on their NOTBEFORE and NOTAFTER dates. Depending on whether the query is a single-table query or a join, the argument of VALID_AT is interpreted differently.

For single-table queries the argument may be either a single scalar value or an arrayref. Each individual value may be either an integer or a DateTime object.

If an integer value is passed, it is interpreted as seconds since the epoch. Alternatively, a DateTime object may be passed instead of an epoch value.

Only those entries are returned which match the validity specification.

Examples:



  VALID_AT => time
or
  VALID_AT => DateTime->now



selects entries that are valid now



  VALID_AT => time + 3600



selects entries that will be valid in one hour



  VALID_AT => [ time, time + 3600 ]



selects entries that are valid both now and one hour from now.

Validity specification for joined tables

If multiple tables are linked using the join syntax, the VALID_AT named parameter must be an array reference, very similar to the JOIN specification. The number of array elements must match the number of joined tables. Each individual entry of the arrayref specifies the validity for the corresponding table, just as in JOIN. For tables that do not have a NOTBEFORE/NOTAFTER date, the array element must be undef. Tables that do have a validity may use a validity specification just as explained in the previous section for single-table queries.

Example:



  $result = $dbi->select(
    #          first table      second table
    TABLE => [ 'CERTIFICATE', 'CERTIFICATE_ATTRIBUTES' ],

    # return these columns
    COLUMNS => [ 'CERTIFICATE.SUBJECT' ],

    JOIN => [
        #  on first table  second table
        [ 'IDENTIFIER', 'IDENTIFIER' ],
    ],
    #             first table            second table (no notbefore -> undef)
    VALID_AT => [ [ time, time + 3600 ], undef ],
    DYNAMIC => {
        'CERTIFICATE_ATTRIBUTES.ATTRIBUTE_KEY' => { VALUE => 'somekey-5' },
    },
  );



Aggregate statements

It is possible to include aggregate statements in the query by using a hash reference for the column specification instead of a scalar. In this case the hash key 'COLUMN' must be set to the desired column name.

The key 'AGGREGATE' indicates that an aggregate function should be applied to the column. In this case the value must be one of 'MIN', 'MAX', 'COUNT', 'UNIQUE' or 'AVG'. UNIQUE is translated to COUNT(DISTINCT column). Alternatively, the key 'DISTINCT' with a true value prepends DISTINCT to the column (see the second example below).

Aggregate example 1



  $result = $dbi->select(
    #          first table  second table
    TABLE => [ 'WORKFLOW', 'WORKFLOW_CONTEXT' ],

    # return these columns
    COLUMNS => [
        {
            COLUMN    => 'WORKFLOW_CONTEXT.WORKFLOW_CONTEXT_KEY',
            AGGREGATE => 'MAX',
        },
        'WORKFLOW.WORKFLOW_SERIAL',
    ],
    JOIN => [
        #  on first table      second table
        [ 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL' ],
    ],
    DYNAMIC => {
        'WORKFLOW.WORKFLOW_SERIAL' => { VALUE => 10004 },
    },
  );



results in the following query:



 SELECT
    MAX(workflow_context.workflow_context_key),
    workflow.workflow_id
 FROM workflow, workflow_context
 WHERE workflow.workflow_id=workflow_context.workflow_id
   AND workflow_context.workflow_id=?
 ORDER BY workflow_context.workflow_context_key,
   workflow.workflow_id



Aggregate example 2



  $result = $dbi->select(
    #          first table  second table
    TABLE => [ 'WORKFLOW', 'WORKFLOW_CONTEXT' ],

    # return these columns
    COLUMNS => [
        {
            COLUMN   => 'WORKFLOW_CONTEXT.WORKFLOW_CONTEXT_KEY',
            DISTINCT => 1,
        },
        'WORKFLOW.WORKFLOW_SERIAL',
    ],
    JOIN => [
        #  on first table      second table
        [ 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL' ],
    ],
    DYNAMIC => {
        'WORKFLOW.WORKFLOW_SERIAL' => { VALUE => 10004 },
    },
  );



results in the query



 SELECT
    DISTINCT workflow_context.workflow_context_key,
    workflow.workflow_id
 FROM workflow, workflow_context
 WHERE workflow.workflow_id=workflow_context.workflow_id
   AND workflow_context.workflow_id=?
 ORDER BY workflow_context.workflow_context_key,
   workflow.workflow_id



Distinct results

If you want the results to be distinct, you can specify a global DISTINCT key with a true value. This is particularly interesting when used with joins.
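
Example (a sketch reusing the join from the aggregate examples above):

  my $result = $dbi->select(
      TABLE   => [ 'WORKFLOW', 'WORKFLOW_CONTEXT' ],
      COLUMNS => [ 'WORKFLOW_CONTEXT.WORKFLOW_CONTEXT_KEY' ],
      JOIN    => [
          [ 'WORKFLOW_SERIAL', 'WORKFLOW_SERIAL' ],
      ],
      # remove duplicate rows from the result set
      DISTINCT => 1,
  );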

See also

OpenXPKI::Server::DBI::DBH and OpenXPKI::Server::DBI::Schema


perl v5.20.3 OPENXPKI::SERVER::DBI::SQL (3) 2016-04-03
