tcl.brk-level Verb: Access/TCL, logical.expressions.in.acodes Article/Article, tcl.resize Verb: Access/TCL, spreadsheet.article Article/Article, tcl.dialer Verb: Access/TCL, basic.onerr Definition/BASIC Program, tcl.bformat Verb: Access/TCL, general.dialer Definition/General, tcl.level.pushing Definition/TCL, tcl.network-setup Verb: Access/TCL, perf Definition/General, tcl.ap.unix Verb: TCL2/Unix, tcl.cvtcpy Verb: Access/TCL, basic.print.on Statement/BASIC Program, filename.devs Definition/Access: General, tcl.block-print Verb: Access/TCL, basic.common Statement/BASIC Program, tcl.item Verb: Access/TCL, referential.integrity.b-tree Article/Article, boot.error Definition/Unix, general.unix.q.ptr Definition/General, tcl.stack.definition Definition/TCL, UE.61A2 User Exit/PROC, attribute.defining.item.article Article/Article, basic.%kill C Function/BASIC Program, tcl.set-imap Verb: Access/TCL, general.hot.backup Definition/General, pxp.intro Definition/System Architecture, access.selection.processor Definition/General, sdb Definition/Unix, Pickto.ap.3 Article/Article, pib.status Definition/General, up.cut.paste Definition/Update Processor, tcl.set-iomap Verb: Access/TCL, tcl.t-att Verb: Access/Tape Commands, access.ss Modifier/Access: Verbs, tcl.set-ovf-local Verb: Access/TCL, tcl Introductory/TCL, basic.fold Function/BASIC Program, tcl.shp-status Verb: Access/TCL, connectivity.to.unix.in.ap Article/Article, tcl.brk-debug Verb: Access/TCL, tcl.reblock-ovf Verb: Access/TCL, tcl.basic-prot Verb: Access/TCL, tcl.buffers.g Verb: Access/TCL, tcl.logto Verb: Access/TCL, general.header.q.ptr Definition/General, runoff.intro Introductory/Runoff: Commands, basic.call Statement/BASIC Program, tcl.tape-socket Verb: Access/TCL, compile.time.date.stamp.rp Definition/BASIC Program, sib Definition/PROC

tcl.brk-level

Command tcl.brk-level Verb: Access/TCL
Applicable release versions: AP
Category TCL (746)
Description causes the <break> key to push a level on subsequent uses.

Note: on some systems, when the <break> key is set to push a level, it is not possible to push a level while in the debugger. To push a level while in the debugger, enter a colon (:) followed by a <return> or <enter>.

It is not possible to push a level while at the TCL prompt. At least one character must be entered.
Syntax
Options
Example
Purpose
Related tcl.esc-data
tcl.debug
tcl.esc-level
tcl.brk-debug
basic.debug
tcl.break-key-on
tcl.break-key-off
tcl.break-key
levels
tcl.level.pushing
system.debugger.end
system.debugger.:
ue.218d

logical.expressions.in.acodes

Command logical.expressions.in.acodes Article/Article
Applicable release versions:
Category Article (24)
Description constructing logical expressions in "A" processing codes.

contributed by Malcolm Bull. Original article ran in PickWorld Magazine.

In terms of value for money, the A processing code is unrivaled on R83, and is beaten only by the Advanced Pick CALL processing code, as discussed elsewhere in this issue. The A code can perform the actions of a great many other codes:

A1:" ":2 is equivalent to C1;" ";2

A2["1","3"] is equivalent to T1,3

A(1+2)/(3-4) is equivalent to, and considerably easier to write than F;1;2;+;3;4;-;/

and together with the additional fact that the A code can itself directly apply other conversion codes:

A(2*3/"100")(MD2) A3(G1*1)(MTS) A4(G3 1)(MCT)

give it tremendous power.

In this article, I want to look particularly at the logical capabilities of the A code. This form of the A code allows logical comparisons or tests to be made, and a value of 1 (=true) or 0 (=false) will be returned according to the result of that comparison. For example, the code:

A4=5

would return a value of 1 (true) if attribute 4 of a data item were equal to attribute 5, otherwise a value of 0 (false) would be returned. Another example:

AN(QTY)>"0"

would return a value of 1 if the QTY is greater than 0, otherwise a value of 0 will be returned.

In the logical A code, the operators are:

# not equal to
< less than
<= less than or equal to
= equal to
> greater than
>= greater than or equal to
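The semantics of these operators can be sketched in Python (a hypothetical model, not Pick code: attribute numbers are modeled as indexes into a list, and the result is the string "1" or "0" just as the A code returns):

```python
# Hypothetical model of a logical A code such as A4=5 or A1#2.
# An item is modeled as a list whose index n stands for attribute n;
# the comparison yields "1" (true) or "0" (false).

def a_logical(item, amc_a, op, amc_b):
    a, b = item[amc_a], item[amc_b]
    ops = {
        "#":  a != b,
        "<":  a < b,
        "<=": a <= b,
        "=":  a == b,
        ">":  a > b,
        ">=": a >= b,
    }
    return "1" if ops[op] else "0"

item = ["key", "x", "y", "z", "10", "10"]   # attributes 1..5
print(a_logical(item, 4, "=", 5))           # A4=5 -> "1"
print(a_logical(item, 1, "#", 2))           # A1#2 -> "1"
```

Note that this sketch compares strings; the real A code compares numerically when both operands are numeric (as in AN(QTY)>"0").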

The logical A code can be used as shown here to output a value of 1 or 0, but there are ways of exploiting this further. One possible application might be in a situation where it is required to display today's date if attribute 7, say, were blank, otherwise to display the data held in attribute 8. This might be achieved with a processing code such as:

A(7="")*D+(7#"")*8

If the result were to be further converted to external Date format, this code could be modified to:

A((7="")*D+(7#"")*8))(D)

This will only work successfully for numeric data. For general data, we might use the form:

A("HIGH"["1",(2>5)*"99"]):("OK"["1",(2=5)*"99"]):("LOW"["1",(2<5)*"99"])

On implementations such as Reality and Ultimate (and AP. ed.), the AIF correlative offers alternative means of testing data values and taking appropriate action. For example, the correlative:

AIF 2<5 THEN "LOW" ELSE "OK"

will compare the contents of attribute 2 with those of attribute 5 and output either the word LOW or the word OK according to the result of the comparison. We might need this in a situation where attribute 2 contains the actual stock level of our products, and attribute 5 contains the minimum permissible stock level. In a more complex case, we might use the AIF correlative:

AIF 2=5 THEN "OK" ELSE IF 2<5 THEN "LOW" ELSE "HIGH"

which will further output one of the words HIGH, LOW or OK according to the relative sizes in attributes 2 and 5. On R83 implementations which do not offer AIF, the user must produce these results by exploiting the simple logical operations shown above.

The reports in Figure 1 illustrate how we might perform the logical test:

IF 2 < 5 THEN print 'LOW' ELSE print 'OK'

by means of the A code:

A"OK LOW"[(2<5)*"3"+"1","3"]

returning either of the words OK or LOW, according to the values of attributes 2 and 5. It is worth taking a closer look at the action of the code:

(1) We start with a literal "OK LOW" holding the pieces of text which we wish to display as substrings. Pay particular attention to the fact that this is made up of two substrings "OK " and "LOW" both 3 characters in length.

(2) We first compare attribute 2 with attribute 5:

2<5

returning a value of 1 if attribute 2 is less than attribute 5, or 0 otherwise (if 2 is greater than or equal to 5).

(3) We then multiply the result (1 or 0) by 3 (the length of each substring) to produce a result of either 3 or 0, and then we add 1:

(2<5)*"3"+"1"

to find the starting position of the required substring within the literal string, to produce 4 (if the condition is true) or 1 (if the condition is false).

(4) Finally, starting at this calculated position (1 or 4), we extract 3 characters from the literal string.

"OK LOW"[(2<5)*"3"+"1","3"]

to return either OK (if attribute 2 is greater than or equal to attribute 5) or LOW (if attribute 2 is less than attribute 5).
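The four steps above can be sketched in Python (a hypothetical model, not Pick code; Pick substring extraction [start,length] counts from 1, so the sketch adjusts to Python's 0-based slicing):

```python
# Model of A"OK LOW"[(2<5)*"3"+"1","3"]:
# (1) the literal "OK LOW" holds two 3-character substrings "OK " and "LOW";
# (2) the comparison 2<5 yields 1 or 0;
# (3) multiply by 3 and add 1 to get the starting position (4 or 1);
# (4) extract 3 characters from that position.

def level(attr2, attr5):
    text = "OK LOW"
    start = (1 if attr2 < attr5 else 0) * 3 + 1
    return text[start - 1:start - 1 + 3].strip()

print(level(8, 30))    # attribute 2 below attribute 5 -> "LOW"
print(level(55, 30))   # otherwise -> "OK"
```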

The same result could have been produced by a longer, but possibly more intelligible, form:

A("LOW"["1",(2<5)*"99"]):("OK"["1",(2>=5)*"99"])

If, say, attribute 2 (the actual stock level) were multi-valued and attribute 5 (the minimum stock level) were just a single value, then we should use the form:

2<5R

in each case in order to re-use the single value in attribute 5 with each of attribute 2's multi-values.

Figure 1

LIST STOCK DESCRIPTION QTY MINIMUM LEVEL2

STOCK.. Description.............. QTY MINIMUM... LEVEL

1000 DESK, GREEN-BLUE, ASH 8 30 LOW
2000 SETTEE, YELLOW, OAK 18 30 LOW
3200 SIDEBOARD, NEUTRAL, ASH 15 15 OK
4200 SETTEE, GOLD, ASH 55 30 OK
8763 SIDEBOARD, MAROON, ASH 10 15 LOW
9000 SIDEBOARD, YELLOW, ASH 99 15 OK

SORT STOCK BY-EXP LEVEL2 "LOW" DESCRIPTION QTY MINIMUM LEVEL2

STOCK.. Description.............. QTY MINIMUM... LEVEL

1000 DESK, GREEN-BLUE, ASH 8 30 LOW
2000 SETTEE, YELLOW, OAK 18 30 LOW
4200 SETTEE, GOLD, ASH 10 30 LOW
8763 SIDEBOARD, MAROON, ASH 10 15 LOW
8763 SIDEBOARD, MAROON, ASH 5 15 LOW
8763 SIDEBOARD, MAROON, ASH 0 15 LOW

I use a further - albeit somewhat arcane - application of this same technique to produce reports on a file called ACC.SAVES which records the details of my account-save diskettes. Each item on the file is the name of an account which is (or has been) on my system. A report on the ACC.SAVES file shows the name of the account, the rotation number of the latest account-save diskette, the date when the diskette was produced and an indication of whether or not the account is still on the system. It is this last piece of data, produced by the definition ON/OFF which employs the code:

A"OFFON"[((0(TSYSTEM;X;;1)(T1,1)="D")*"3"+"1"),"3"]

which is illustrated in Figure 2. I leave you to work out exactly how the code checks the entry on the SYSTEM file to see whether or not there is a D-pointer, indicating whether or not each particular account is currently on the system.

Figure 2.

SORT ACC.SAVES PRODUCED LATEST ON/OFF

ACC.SAVES........ Produced............. Latest ON/
disk OFF

ACCESS.TEXT 19:11:53 02 NOV 1992 2 ON
DBASE.TEXT 19:22:51 02 NOV 1992 1 ON
DRILLS 16:48:54 18 JUN 1992 1 OFF
MALCOLM 19:02:56 02 NOV 1992 1 ON
PROGRAMS 19:32:59 02 NOV 1992 1 ON
SB+.DEFN 19:08:26 02 NOV 1992 1 ON
SYSPROG 19:11:53 02 NOV 1992 1 ON
SYS.DEV.TEXT 18:09:17 13 JUL 1992 1 OFF

In such cases, we could pick up a simpler substring, as with:

A"XY"[(2<5)*"1"+"1","1"]

and then use the result (X or Y) to return either of two messages held with item-ids X and Y on a text file:

A("XY"[(2<5)*"1"+"1","1"])(TMESSAGES;X;;1)

This is particularly convenient when the text of the messages is likely to change.

We can extend the previous reasoning and use an alternative technique to return one of three possible values.

Imagine a situation in which we need to carry out the following tests:

If 2 < 5, then output "LOW"

If 2 = 5, then output "OK"

If 2 > 5, then output "HIGH"

In the solution shown in Figure 3, I have established three definitions, called LOW, OK and HIGH, which use processing similar to that of the previous example to return either null or a string according to the relative values of attributes 2 and 5. A fourth definition, LEVEL3, then concatenates the results of the three definitions. Since two of these will always return null, only the required string will be output.

LOW OK
001 A A
002 2 2
003 LOW OK
004
005
006
007
008 A"LOW"["1",(2<5)*"3"] A"OK"["1",(2=5)*"2"]
009 L L
010 1 1

HIGH LEVEL3
001 A A
002 2 2
003 HIGH LEVEL
004
005
006
007
008 A"HIGH"["1",(2>5)*"4"] AN(LOW):N(OK):N(HIGH)
009 L L
010 1 1

Figure 3

LIST STOCK DESCRIPTION QTY MIN LOW OK HIGH LEVEL3

STOCK. Description............. Qty Min Low OK High Level

1000 DESK, GREEN-BLUE, ASH 8 30 LOW LOW
2000 SETTEE, YELLOW, OAK 18 30 LOW LOW
3200 SIDEBOARD, NEUTRAL, ASH 15 15 OK OK
4200 SETTEE, GOLD, ASH 55 30 HIGH HIGH
8763 SIDEBOARD, MAROON, ASH 10 15 LOW LOW
9000 SIDEBOARD, YELLOW, ASH 99 15 HIGH HIGH

The same result could have been produced by the longer, but possibly clearer form:

A("HIGH"["1",(2>5)*"4"]): ("OK"["1",(2=5)*"2"]): ("LOW"["1",(2<5)*"3"])

or even:

A("STOCK LEVEL HIGH"["1",(2>5)*"99"]): ("STOCK LEVEL AT EVENS POSITION"["1",(2=5)*"99"]): ("!! STOCK LEVEL BELOW MINIMUM !!"["1",(2<5)*"99"])

When any condition is false (and returns a value of 0), we take a substring of length 0 (that is, a null value); when a condition is true (and returns a value of 1), we take a substring of length other than 0. These three substrings are then concatenated to produce the output result.
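The selection-by-concatenation technique can be sketched in Python (assumed attribute values; the *"99" trick is modeled by a slice that truncates to the whole string when the condition is true and to null when it is false):

```python
# Model of A("HIGH"["1",(2>5)*"99"]):("OK"["1",(2=5)*"99"]):("LOW"["1",(2<5)*"99"]):
# each term contributes its text only when its condition is true
# (substring length 99 truncates to the full string; length 0 yields
# null), and the three results are concatenated.

def level3(a2, a5):
    high = "HIGH"[:99 if a2 > a5 else 0]
    ok   = "OK"[:99 if a2 == a5 else 0]
    low  = "LOW"[:99 if a2 < a5 else 0]
    return high + ok + low

print(level3(8, 30))    # LOW
print(level3(15, 15))   # OK
print(level3(55, 30))   # HIGH
```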

Like all processing codes, these must, of course, all be written on one attribute of the definition.

There may be a situation where output data is held on any of several possible files. For example, we may have a file of invoices in which the customer code may identify a USA customer on the USA.CUST file, a British customer on the UK.CUST file, or a customer on the OSEAS.CUST file. We can pick up the customer name from the appropriate file by any of the codes:

A((11(TUSA.CUST;C;;1))(TUK.CUST;C;;1))(TOSEAS.CUST;C;;1)

or:

TUSA.CUST;C;;1]TUK.CUST;C;;1]TOSEAS.CUST;C;;1

or:

F;11;(TUSA.CUST;C;;1);(TUK.CUST;C;;1);(TOSEAS.CUST;C;;1)

Each of these codes will pick up the name in attribute 1 of the USA.CUST file; if there is no such item on USA.CUST, it will go on to pick up the name from UK.CUST; if there is no such item on UK.CUST, we use OSEAS.CUST, and so on.

The action of these codes is fairly self-explanatory: we first translate the original code against USA.CUST and return either the customer's name (if it is on the USA.CUST file) or the original code (if it is not, since we use the C subcode in the Tfile code). This result is then used as input data to translate against UK.CUST; this will return either the input data (the original code or the name from USA.CUST) or the translated name from UK.CUST, and so on. This process can be repeated indefinitely for a number of such files.

The solution assumes that the data returned from any of the files (a customer name, in this particular situation) is not a valid key to any of the later files in the sequence.
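The fallback chain can be sketched in Python (dictionaries stand in for the USA.CUST, UK.CUST and OSEAS.CUST files; the keys and names are made up for illustration):

```python
# Model of the chained Tfile;C translates: each lookup returns the
# translated value if the key exists, otherwise passes its input
# through unchanged (the behavior of the 'C' subcode).

def t_translate_c(table, key):
    return table.get(key, key)     # C subcode: return input on failure

usa   = {"U100": "Acme Inc"}
uk    = {"K200": "Bloggs Ltd"}
oseas = {"Z300": "Fernhill Pty"}

def customer_name(code):
    out = t_translate_c(usa, code)
    out = t_translate_c(uk, out)
    return t_translate_c(oseas, out)

print(customer_name("U100"))   # Acme Inc
print(customer_name("K200"))   # Bloggs Ltd
print(customer_name("Z300"))   # Fernhill Pty
```

As the text notes, this only works if a returned name is never itself a valid key to a later file in the sequence.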

Since the A code does not provide a facility for the logical AND and OR operators, this must be achieved by some other means. A little mathematical thought will reveal how we might transform the two values 1 and 1 into 1 (to simulate AND) and the two values 1 and 0 into 1 (to simulate OR). For example, we may use an element such as:

(2<3)*(4>5)

to simulate the condition when attribute 2 is less than attribute 3 AND attribute 4 is greater than attribute 5. This will return a value 1 only when both conditions obtain, otherwise it will return a value of 0. This value can then be used in the manner described above. The element:

((2<3)+(4>5))>"0"

can be used to simulate the condition when attribute 2 is less than attribute 3 OR attribute 4 is greater than attribute 5. This will return a value 1 when either condition is true, otherwise it will return a value of 0.
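In Python terms (a sketch, with comparison results modeled as 1/0 integers as the A code produces them):

```python
# Simulating AND with multiplication and OR with addition on 1/0
# comparison results: 1*1=1 only when both hold; (a+b)>0 when either
# holds.

def and_sim(a2, a3, a4, a5):
    return (1 if a2 < a3 else 0) * (1 if a4 > a5 else 0)

def or_sim(a2, a3, a4, a5):
    return 1 if (1 if a2 < a3 else 0) + (1 if a4 > a5 else 0) > 0 else 0

print(and_sim(1, 2, 9, 5), or_sim(1, 2, 9, 5))   # 1 1 (both true)
print(and_sim(1, 2, 3, 5), or_sim(1, 2, 3, 5))   # 0 1 (one true)
print(and_sim(3, 2, 3, 5), or_sim(3, 2, 3, 5))   # 0 0 (both false)
```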

Constructions of this form are illustrated in Figure 4. The definition AND.TEST uses an A code such as:

A(2>"90")*(5="15")

returns a value of 1 or 0, according to whether or not attribute 2 is greater than 90 and attribute 5 is equal to 15. The definition OR.TEST uses an A code such as:

A((2>"90")+(5="15"))>"0"

returns a value of 1 or 0, according to whether or not attribute 2 is greater than 90 or attribute 5 is equal to 15.

Figure 4

SORT STOCK *A2 *A5 AND.TEST OR.TEST

STOCK..... *A2....... *A5....... AND OR
TEST TEST

1000 8 30 0 0
2000 18 30 0 0
3200 15 15 0 1
4200 55 30 0 0
8763 10 15 0 1
9000 99 15 1 1

We could then apply these forms in any of the contexts which we introduced earlier. For example:

A("LEVEL < 0"["1",(2<"0")*"99"]): ("LEVEL IN RANGE 0-10"["1",(2>="0")*(2<="10")*"99"]): ("LEVEL IN RANGE 10-20"["1",(2>="10")*(2<="20")*"99"]): ("LEVEL IN RANGE 20-30"["1",(2>="20")*(2<="30")*"99"]): ("LEVEL > 30"["1",(2>="30")*"99"])

I hope that the variety of techniques - and especially the alternative methods - which I have presented in this article might inspire you to pursue A codes and to get Access to do more and more work for you and your organization. Access is always capable of doing just that bit more than you think.
Syntax
Options
Example
Purpose
Related

tcl.resize

Command tcl.resize Verb: Access/TCL
Applicable release versions: AP 6.1
Category TCL (746)
Description resizes a file to the desired modulo.

The "resize" command increases or decreases the apparent contiguous portion (or modulo) of the specified file without requiring a file-restore. It either adds or releases the amount of overflow necessary to reach the new modulo, and re-hashes all of the items. The re-hashing is done by groups and the current relative group being re-hashed is displayed on the screen.

Unlike previously available resizing utilities, the Advanced Pick "resize" command allows a file to be read or modified while the items are being re-hashed. Furthermore, the resizing process does not require a completely new block of overflow with a size equal to the new modulo. Instead, it allocates a new file "segment" which is only as big as the difference between the old and new modulos and maps this segment onto the existing file. The file then appears to have one contiguous block available even though it may really be several blocks internally.
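As an illustrative model only (with an assumed stand-in hash function, not the actual internals of "resize"), the group-by-group re-hash looks like this:

```python
# Illustrative model: items hash to a group as hash(id) % modulo.
# After a modulo change, each old group is re-hashed in turn, which is
# why the verb can display the current relative group as it works.

def hash_group(item_id, modulo):
    # stand-in hash; the real file system uses its own hashing
    return sum(item_id.encode()) % modulo

def rehash(groups_old, new_modulo):
    groups_new = [[] for _ in range(new_modulo)]
    for group in groups_old:              # one relative group at a time
        for item_id in group:
            groups_new[hash_group(item_id, new_modulo)].append(item_id)
    return groups_new

old = [[] for _ in range(7)]              # file at modulo 7
for item_id in ("a1", "b2", "c3"):
    old[hash_group(item_id, 7)].append(item_id)

new = rehash(old, 13)                     # resized to modulo 13
print(sum(len(g) for g in new))           # 3: every item survives
```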

Once a resizing command has begun re-hashing a file, the process can be logged off or interrupted without problems. The resizing process can then be re-started on the same line or on a phantom.

If the modulo is not specified, the "resize" command resizes the file according to the "reallocation" attribute in the file's D-pointer.

To see all resizing processes currently active, use the "list-resizing" command.

To temporarily stop all resizing processes on the system, use the "kill-resizing" command.

To restart all resizing processes (on phantoms), use the "check-resizing" command. This command is executed at coldstart time to restart any resizing commands which were interrupted by a shutdown.

After resizing a file, the D-pointer of that file will display a modulo equal to the new modulo. The "reallocation" attribute is also changed to the new modulo. The internal base and modulo of each of the file's segments are displayed in the "segment-base" and "segment-mod" attributes. These attributes cannot be modified.

Note that it is currently not possible to resize below the original modulo.
Syntax resize file.reference {modulo} {(options}
Options a Allocate new file space only. If this option is used, then the new segment is added, but no items are re-hashed. This is useful for allocating the new file space in the foreground, and then starting another resizing process as a phantom to complete the re-hashing process.

s Suppress output of the relative group counter during re-hashing.

u Unconditional resizing. Normally, resizing processes will pause temporarily when some other process is accessing the file in a sequential fashion (like the save, or an Access or Pick/BASIC select). This is because items will be in motion during the resizing and sequential processes may find an item twice. The "u" option disables this behavior so that resizing will proceed irrespective of any sequential processes. Also, the "u" option will release extra unused filespace at the end of a resize down irrespective of how many users have that file open.

w{n} Wait after every group. If no numeric parameter is specified, then this option will cause the resizing process to wait for approximately 100ms between each group that it re-hashes. If an optional numeric parameter is specified, then the process will sleep for n seconds between each group. This option minimizes the impact on overall system performance during the resizing process and is strongly recommended.

z Rehash a small number of groups only. This is used by the check-resizing and kill-resizing commands after a resizing process has been terminated unexpectedly. Under these conditions, the resize command will process enough groups to assure that no duplicate items occur in groups which were previously being re-hashed.
Example
Assume that a file called "mydata" exists with a modulo of 7, and 
that the "istat" command indicates a suggested modulo of 13.

resize-file mydata 13
Allocating 6 additional frames for primary file space.
Rehashing 7 group(s).
7

[188] Resizing complete.

The file now exists with a modulo 13. Now, suppose that a large amount of data 
is removed from this file, and "istat" now reports that the file 
should be a modulo of 11. The following command will shrink the file:

resize-file mydata 11
Rehashing 13 group(s).
13
Releasing 2 frames back to overflow.

[188] Resizing complete.
Purpose
Related tcl.list-resizing
tcl.resize-file
access.istat
modulo.def
d-pointer
tcl.check-resizing
tcl.kill-resizing
filename.mds
filename.file-of-files
filename.resizing
tcl.f-resize

spreadsheet.article

Command spreadsheet.article Article/Article
Applicable release versions:
Category Article (24)
Description discusses using the spreadsheet connective.

contributed by Chris Alvarez. Original article ran in PickWorld Magazine.

One great advantage of spreadsheet programs is the ability to present reporting data in columns and rows. While the Access query language provided with Advanced Pick does not have spreadsheet functionality, the new 'ss' connective gives the user the ability to create reports similar to those produced by popular spreadsheet programs.

The spreadsheet connective gives Access the ability to build reports with column and row headings based upon date 'buckets'. For example, if the database contains a sales history file, it is possible to build a report with sales totals for each month of the year across the column headings and one line per dealer on the row heading. Without the spreadsheet connective, this report would only be possible by creating 12 separate dictionary items, one for each month.

With the 'ss' connective, this can be accomplished with any current date dictionary attribute.

In order to produce the above sales report, it is first necessary to construct a date attribute with an output conversion for our bucket headings. For example, the database contains a sales history file called 'SALES' which contains the order date in attribute 4. The following attribute called 'MONTH' will serve as our bucket headings:


MONTH
001 A
002 4
003
004
005
006
007 A4(DMA)["1","3"]
008
009 L
010 8


The a-correlative on the output conversion (line 7) will convert the internal date found in attribute 4 to the alphabetic month name and then extract the first 3 characters. This will serve as the headings to the columns for our report.
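A hypothetical Python equivalent of that conversion chain (convert the date to its alphabetic month name, then take the first three characters):

```python
# Sketch of A4(DMA)["1","3"]: the (DMA) conversion yields the
# alphabetic month name of a date; ["1","3"] extracts 3 characters
# starting at position 1.
import datetime

def month_bucket(d):
    return d.strftime("%B").upper()[:3]   # "NOVEMBER" -> "NOV"

print(month_bucket(datetime.date(1992, 11, 2)))   # NOV
```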

The next step is to build a dictionary to display the total dollar amount for each order. This attribute will be the value spread across the column buckets. The 'SALES' file holds the total for each order in attribute 20:


ORDER.TOTAL
001 A
002 20
003
004
005
006
007 MR02,
008
009 R
010 10


The final dictionary required for our example is the attribute to display the dealer name. The 'SALES' file holds the customer number in attribute one and the following dictionary uses that number to translate to the customer file to pick up the customer name held in attribute 1 of the customer file:


DEALER
001 A
002 1
003
004
005
006
007
008 TCUSTOMER;X;;1
009 L
010 30


The syntax of the spreadsheet connective is:

SS beg.date end.date attribute

where 'ss' is the connective, beg.date and end.date are the beginning and ending dates of the report's range, in external format and delimited with quotes, and attribute is the dictionary attribute to be spread across the columns. Using the 3 attributes from above, the following Access statement would produce the sales report for our example:


SORT SALES BY DEALER BREAK-ON DEALER SS "01/01/92" "12/31/92" ORDER.TOTAL DET-SUPP


The column width used for each of the monthly columns is the current column width for the ORDER.TOTAL attribute, in this case 10. Also, please keep in mind that the date conversion in the date attribute, in this case MONTH, must always be on the output conversion for the spreadsheet connective to work properly.
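What the ss connective computes can be pictured as a pivot. A rough Python model, with made-up rows standing in for SALES data (dealer, order date, order total):

```python
# Pivot model of the ss connective: one row per dealer, one monthly
# bucket column per order date, totalling the order amounts into each
# bucket. Dealer names and amounts are invented for illustration.
import datetime
from collections import defaultdict

sales = [
    ("ACME",   datetime.date(1992, 1, 15), 100.0),
    ("ACME",   datetime.date(1992, 1, 20),  50.0),
    ("BLOGGS", datetime.date(1992, 2,  3),  75.0),
]

pivot = defaultdict(lambda: defaultdict(float))
for dealer, d, total in sales:
    bucket = d.strftime("%B").upper()[:3]   # the MONTH attribute
    pivot[dealer][bucket] += total

print(dict(pivot["ACME"]))     # {'JAN': 150.0}
print(dict(pivot["BLOGGS"]))   # {'FEB': 75.0}
```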

When experimenting with this example, another new connective exists to easily limit the amount of data used for the sort. Simply add 'SAMPLING 10' to the end of the above statement and the system will find 10 items that match the specified selection criteria and print the report using only those 10 items.
Syntax
Options
Example
Purpose
Related access.ss

tcl.dialer

Command tcl.dialer Verb: Access/TCL
Applicable release versions: AP 6.1
Category TCL (746)
Description controls the dialer subsystem, which allows transferring data to remote systems over the phone in batch mode. The section 'dialer, General' explains the principles of the dialer subsystem and the main concepts used here.


Using the menus:

Without any argument, a menu is displayed. The following are the valid keys. When indicated, arrow keys can also be used:
RETURN Executes the highlighted option.

'Q' Or ESC. Exit menu and go back to previous menu.

'X' Exit menu and go back to TCL.

n Number from 0 to 9. Select the option number 'n'. '0' is option 10.

letter Select the next menu option starting with the specified letter (except 'x' or 'q').

Ctrl-N Or down arrow. Move cursor down.

Ctrl-B Or up arrow. Move cursor up.

Ctrl-K Or right arrow. Move cursor right to next column.

Ctrl-J Or left arrow. Move cursor left to the previous column.

Ctrl-X Cancel, in an input field.


Displays:

The screen is divided into an upper section for menu display and a lower section for messages, help screens and user prompts.


Installation:

The first time the system is installed, the user is required to enter a local system name and the time zone in which the system is located:
Setup Local Host Name
Local host name :

Confirm (y/n/q) :
Enter any name of up to 8 characters and hit <RETURN>. This name is also shown by the TCL command 'node'.
Setup Time Zone
1 UK
2 Azores
...
Confirm (y/n/q) :
Select the appropriate time zone and hit <RETURN>.

The remote sites and serial devices should then be set using the 'Setup Menu' described later.


Main Menu:

The main menu is used for normal operations. The options are:
1 Queue status
Display the status of the queues to all systems (system name, number of queued jobs and description of the next job) or of a specific system (detailed description of each job).

2 Device status
Display the last messages logged by a specified IO daemon. The messages are displayed most recent first.

3 Start dialers
Start the dialer IO daemons. This command can also be executed from TCL to be used in macros or PROCs. One IO daemon is started for each defined device.

4 Stop dialers
Stop the dialer IO daemons. This command can also be executed from TCL to be used in macros or PROCs. When an IO daemon is in the middle of a call, it will not respond to a stop request.

5 Setup
System setup and test. See the section 'Setup Menu' below.

6 Check conflicts
Display any pending conflicts between changes made to the local data base and changes received from remote systems. See the section 'Resolving conflicts' below.

7 List permanent log
The permanent log contains all messages logged by any IO daemon. This log is not cleared automatically. The messages are displayed most recent first. Use the function keys shown in the title to navigate through the log. Hitting a colon (':') displays a command menu which allows searching for specific text and starting the display at a specified time and date.

8 Messages
Displays the messages sent by other systems, using the command 'msg' of the dialer TCL verb (see the section 'TCL Interface' below), or by the dialer subsystem itself. The list of all the pending messages, sorted by user and by time/date is displayed, with the destination user and the subject. The help section shows more information about each message. When selected, the first few lines of the messages are displayed and the user can P)rint it to the currently assigned form queue, D)elete it, or Q)uit to leave the message. The dialer subsystem generates 'Notice mail' messages in case of serious problems. The messages can also be read from TCL with the 'mail' command. See the section 'TCL Interface' below.


Setup Menu:

The setup menu allows defining the remote systems and devices used by the dialer subsystem, and performing ancillary functions, such as testing the communication, purging the queue, etc.

1 Local Host name
Define the name of the local system. The local system name must also be declared in all the other systems which can accept messages from this local system. This mechanism is required for security reasons, to ensure that messages are properly authenticated.

2 Remote system
Define a remote system. Remote systems must be defined both so that they can be called and so that messages can be accepted from them. A submenu displays the list of the currently defined systems, or 'New System' to create a new entry. The following elements must be provided:

System name
Any string of up to 8 characters. If the system has been defined already, changing the name creates another system (it does not rename the specified system). This can be used to duplicate a system definition.

Calling Schedule
Define, for each day of the week, if and when the system can be called. The System Administrator should choose a time window to get the cheapest rates, when applicable, and, when many systems are calling one central site, try to stagger the calling windows to reduce the risk of collisions. Within an allowed schedule, the call is placed within a 'few minutes' of the specified time. In case of problems, the system will retry no more than three times to establish the communication. The syntax for the schedule is as follows:
never
Do not call the system that day. If all days are set to 'never', the system will never be called. However, data CAN be transmitted to this system, but only when IT calls in.
any (or empty string)
Call at any time, as soon as data is queued for transmission. The actual transmission time will be within the next minute. This should be reserved for leased lines only, since it would be very inefficient to dial for each update.
HHMM-HHMM
The first four digits specify the starting time of the calling window, in 24 Hr format. The second set specifies the ending time. For example '2000-0100' means from 8:00 p.m. to 1:00 a.m.; '0100-0530' means from 1:00 a.m. to 5:30 a.m.
*HHMM
Define a periodic calling schedule. A call can be placed every HH:MM. For example, '*0100' means that a call will take place at 1:00 a.m., 2:00 a.m., etc., throughout the day. It is not advisable to use too small a period. One hour should be the minimum, except in some very exceptional cases.
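The schedule forms above can be modeled as follows (a sketch with assumed semantics; the actual windowing, retry and periodic-call behavior is handled by the dialer itself):

```python
# Check a calling-schedule entry against a time of day given as "HHMM".
# Forms modeled: 'never', 'any' (or empty), 'HHMM-HHMM' windows
# (possibly wrapping midnight, e.g. '2000-0100'), and '*HHMM' periodic
# schedules (a call every HH:MM, so any time of day is eligible).

def may_call(schedule, hhmm):
    if schedule == "never":
        return False
    if schedule in ("any", ""):
        return True
    if schedule.startswith("*"):
        return True
    start, end = schedule.split("-")
    if start <= end:
        return start <= hhmm <= end
    return hhmm >= start or hhmm <= end   # window wraps midnight

print(may_call("2000-0100", "2330"))   # True  (inside the window)
print(may_call("0100-0530", "0600"))   # False (outside the window)
```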

Phone number
Up to 4 phone numbers can be specified. In case of failure, they are tried in the given order. The syntax of the phone number must be compatible with the modem and, possibly, should include any prefix, wait, extension, etc...

Device
Specifies which devices can be used to establish communication. If none is specified, the system will select the first available device. See the section below about creating serial devices on Unix implementations. The syntax of the device number is:
S nnn
Dedicated serial device number 'nnn'. The system assumes it has exclusive access to this device.
nnn
Use the serial device normally associated to the Pick process number 'nnn'. When the serial device is needed, the Pick process will be 'unlinked' to take control of the device, and linked back on to it when the transfer is complete.

Permissions
Defines the operations the specified system can or cannot do when calling IN. Enter 'y' or 'n' to each field:
Call
Defines if the system is allowed to call in at all. If set to 'n', the local host will hang up if receiving a call from this remote site.
Exec
Defines whether the remote system is allowed to execute commands on the system. Commands are run by the IO daemon itself and should be short.
Upd
Defines whether the remote system is allowed to update the local system's data base.
Adm
Defines whether the remote system is allowed to perform remote maintenance operations.

3 Devices
Define a serial device. A submenu displays the list of currently defined devices or 'New device' to create a new entry. The following elements must be provided.
Device id
Device name. For a dedicated device, the id must be prefixed by an 's'. For a shared device, enter the Pick process number which is normally linked to the device to use. Do not use device 0. If shared, also specify the take over time below.
State
Enable or Disable. If disabled, no IO daemon is started and this device will not be used.
Type
Direct or Modem. A direct device is assumed to be constantly connected to the target system through a leased line, for example. Only one system can be reached through this device. Make sure the system definition is consistent with this setting.
Dialer name
Defines the name of the dialer if the device is a 'modem' type. By default, the dialer is 'hayes'. This name is the item-id of the BASIC subroutine in the 'dm,dialers' file which handles the dialog with the modem.
Mode
Defines whether the device is Input only (it can only receive calls), Output only (it does not accept any incoming calls), or both.
Settings
Defines the serial port setting. Note the device MUST support 8 bit characters.
Take over time
Defines the time window during which the dialer takes over a serial device normally shared with a regular Pick process (i.e., a device whose id does NOT start with an 's'). The syntax of the time is identical to the one described for the calling schedule on a remote system. If a time interval (e.g., '2200-0700') is specified, the IO daemon keeps the device attached for the specified duration. This should be the normal setting; for example, specify a window outside normal business hours, when nobody would dial in to log on to Pick. If a periodic schedule (e.g., '*0030') is specified, the IO daemon attaches to the device for one period, detaches for the same period, re-attaches for a new period, and so on. Specifying 'any' or leaving this field blank effectively makes the device dedicated, since the IO daemon attaches to it permanently.
IMPORTANT: Changing the take-over time schedule requires stopping and restarting the dialer subsystem for the change to become effective.
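The interval form of the schedule can be sketched as follows. This is an illustrative Python model only (the actual parsing is done inside the dialer subsystem); it assumes the 'HHMM-HHMM' strings shown in the example above, including windows that wrap past midnight.

```python
def in_window(window: str, hhmm: str) -> bool:
    # True if time 'hhmm' (e.g. '2330') falls inside a take-over window
    # 'HHMM-HHMM'; a window such as '2200-0700' wraps past midnight.
    start, end = window.split("-")
    if start <= end:
        return start <= hhmm < end
    return hhmm >= start or hhmm < end

print(in_window("2200-0700", "2330"))  # inside the night window
print(in_window("2200-0700", "1200"))  # normal daytime use, dialer stays off
```

Zero-padded 24-hour strings compare correctly as plain text, which keeps the sketch free of any date arithmetic.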
User reset
Optional command string sent to the modem to reset it to a user-defined state when the dialer daemon either terminates or relinquishes the shared device. A carriage return is appended to the string. If left empty, a modem-dependent string is used ("ATZ" in the case of the Hayes dialer). If this command fails (e.g., the modem does not respond 'OK'), a factory reset is done.

4 Delete remote system
Delete the definition of a remote system.

5 Delete device
Delete the definition of a device.

6 Test access to system
Attempts to establish a call to the specified system. This option can be used to test a new installation, or to make sure a remote host is reachable. It dials out, establishes the communication and sends 10 test messages to the remote system, which must be powered up and have an appropriate IO daemon started. The IO daemons must NOT be started on the local system; otherwise, a 'no device available' message is issued.

7 Set local time zone
Defines the time zone the local system is in.

8 Set system map
Defines where the updates to a system are to be sent. This option shows a sorted list of the accounts on the system (up to 256 accounts can be shown). Select the account to be set up, or 'ALL ACCOUNTS'. Note the 'dm' account is never selected. Then select whether to update 'All files', 'Select files' or a select list of files. When the 'Select files' option is chosen, the list of files in the account is displayed; hit the <space> bar to select the file(s) to be changed and hit <return> when all the files have been selected; a '*' is displayed in front of the selected files. The selection is saved in a list which can be used as input to the 'Select list of files' option. The user can elect to post updates to both the data level(s) and the dictionary, to the dictionary only, or to the data level only. An option 'REMOVE CALLX' allows suppressing the posting of updates on the entire selected account. Finally, the list of remote system(s) where the data is to be sent must be entered, as well as the optional account name on the remote system. If an account name is not specified, the remote account is assumed to be the same as the local account name. Up to four remote systems can be entered. A remote system cannot be specified twice in the list. Confirmation is required. This option updates the D pointers of the dictionaries and/or data levels of the selected account(s) and files to insert or remove, as appropriate, the CALLX correlative which performs the update posting.
This menu option can be used as long as there are fewer than 6 target system names. Otherwise, it is more convenient to use the TCL form (see the section 'TCL interface' below).

9 Purge queued updates
Purge the queue of data to be transmitted to either all systems or a selected system.

10 Clear permanent log
Clear selected messages or all messages from the permanent log. A sub-menu allows selecting messages older than one week, one month, or all messages.

11 Connect to device
Connect directly to the device to send it simple commands, like resetting the modem. The device must not be attached. Note that only simple commands can be sent to the device (like a reset); dialing out to a system manually this way may be difficult.
12 Submit remote command
Connects to a remote system and executes a command on it. The local site must have been declared with 'sysadmin' privilege on the remote system. This option prompts for the remote system to dial and presents a command menu: 'Submit TCL command' to execute any TCL command on the remote system; 'Terminate remote' to force the IO daemon on the remote system to terminate. The submit command prompts for a command, dials the remote system, submits the command and disconnects without waiting for the end of the command. The terminate option should be used ONLY if the device on the remote system is a shared device; this allows a user to free the device in order to do a remote logon.


Resolving Conflicts :

This option from the main menu examines the conflict file and shows any conflicts not yet resolved. The first menu lists all the conflicts and shows the account, file and item-id. The help message in the message section gives more information about the conflict. The 'Check conflict' option requires that the System Administrator have a good knowledge of the data base structure. Only raw data is presented. More advanced conflict resolution requires understanding the data base organization and cannot be accomplished by a general purpose tool. The following section 'Conflict Data Format' details the structure of the data stored in case of a conflict so that an application-dependent tool can be developed.
When a conflict is selected, the following information is displayed:
From: sys    Date: 12/14/94-09:18    Cause: Conflict change
Acc : dev    File: tmp               Item : x
sys!dev,tmp, x              LOCAL!dev,tmp, x
-----------------------     -----------------------
In this example, 'sys' is the name of the remote system the conflicting change is coming from, the time and date are the local time and date, the cause is a short description of the conflict, 'dev' is the local account, 'tmp' the file and 'x' the item. The left-hand side of the display shows the remote file information and the right-hand side the local one.
The screen is then divided into two columns and more information is shown. The message area on the screen explains the reason of the conflict and prompts the user for action. The various conflicts are:
Number of attributes has changed
This indicates that, originally, the items on the local and the remote systems had different sizes. The display of the two items is an attempt at showing the data on both systems. The result is likely to look strange...

Conflicting attribute
The original attribute on the local system is not what it was on the remote system. The display shows the results of both changes.

Conflicting values in attribute
At least one value in the original attribute on the local system is not what it was on the remote system. The display shows the results of both changes.

The user is then prompted to take an action:
Use data from system 'xxx ' ........ 1
Use local data and update remote ....... 2
Quit (leave conflict unresolved) ....... Q
Cancel conflict. Leave data alone ...... C
Select:
1 Use the data received from the remote system to overwrite the local data.
2 Keep the local data and copy the local data to the remote system to force it to use the local copy. Note that the copy is just enqueued; the system waits until the next calling time to actually copy the data. More changes can be made to the same item before the copy is performed. The item transmitted will be the item as it exists at the time the transfer takes place, not the item as it was at conflict resolution time.
Q Quit. Leave the conflict unresolved.
C Cancel the conflict. The data is left as is on each system, CREATING INCONSISTENCIES ON THE DATA BASES. This option should be used with extreme caution.


Conflict Data Format :

The conflicting data is stored in the file 'dialer.log,conflict'. Each conflict is represented by 3 items:
- Conflict definition item. The item-id is a unique time date stamp. It contains the following attributes (defined in 'dm,bp,includes dialer.inc'):
CFLCT$DATE : Local conflict date.
CFLCT$TIME : Local conflict time.
CFLCT$ACCOUNT : Account on LOCAL system.
CFLCT$DICT : 'dict' if a dict, or ''.
CFLCT$FILE : File name.
CFLCT$ITEMID : Item-id.
CFLCT$SYSTEM : Remote system name.
CFLCT$CODE : Reason for the conflict:
CFLCT$NOFILE : Missing file
CFLCT$BADDIFF : Bad difference string
>0 : Line# in the diff string (see below)
CFLCT$REMACC : Account on REMOTE (source) system.

- 'Old' item. The item-id is the concatenation of '*OLD*' and of the conflict id it depends on. This item contains the item coming from the local system at the time the conflict occurred.

- 'New' item. The item-id is the concatenation of '*NEW*' and of the conflict id it depends on. This item is not really the 'new' item: it contains the 'difference string' describing the changes applied to the remote host. The attribute CFLCT$CODE contains the attribute number in this 'new' item on which the conflict was detected.

The 'difference string' is composed of a series of commands which describe the changes to be applied. Each command is composed of one or more attributes, and starts with a one-letter code. The valid commands are:

Ln The original number of attributes in the item is 'n'.

Cn{]m} Change attribute command starting at attribute 'n'. Change 'm' (default 1) attributes. This command is followed by 'm' pairs of attributes 'oldvalue/newvalue'.

Vn{]m} Change value command starting at attribute 'n'. Change 'm' (default 1) attributes. This command is followed by 'm' triplets of multi-valued attributes built as follows:
valnum ] valnum ] ...
oldval ] oldval ] ...
newval ] newval ] ...
'valnum' is the value number which is modified. 'oldval' is the old value. 'newval' is the new value. If the number of old values is less than the number of new values, it indicates that the values were added. If the number of old values is greater than the number of new values, it indicates that the values were removed. In the example shown in the section 'dialer, General', the value change command would be:
V2
2]3
bb]bbb
bbb

An{]m} Add attribute command starting at attribute 'n'. Add 'm' (default 1) attributes. This command is followed by 'm' attributes which contain the added attributes. If attribute 'n' is not empty, the new attributes are inserted before it.

Dn{]m} Delete attribute command starting at (and including) attribute 'n'. Delete 'm' (default 1) attributes.
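To make the command semantics concrete, here is a simplified Python sketch that applies L, C, A and D commands to an item represented as a list of attribute strings. This is an illustration only: the real difference string uses Pick attribute marks and is applied by 'merge.sub', and the tuple encoding below is invented for the sketch (the V command is omitted for brevity).

```python
def apply_diff(item, commands):
    # 'item' is a list of attribute strings; attribute numbers are 1-based,
    # as in the difference-string commands described above.
    out = list(item)
    for cmd in commands:
        code, n = cmd[0], cmd[1]
        if code == "L":                       # ("L", n): original size check
            assert len(out) == n, "conflict: attribute count changed"
        elif code == "C":                     # ("C", n, [(old, new), ...])
            for i, (old, new) in enumerate(cmd[2]):
                assert out[n - 1 + i] == old, "conflict: old value mismatch"
                out[n - 1 + i] = new
        elif code == "A":                     # ("A", n, [attr, ...])
            out[n - 1:n - 1] = cmd[2]         # insert before attribute n
        elif code == "D":                     # ("D", n, m): delete m attrs
            del out[n - 1:n - 1 + cmd[2]]
    return out

# change attribute 1 from 'a' to 'A', then delete attribute 3:
print(apply_diff(["a", "b", "c"],
                 [("L", 3), ("C", 1, [("a", "A")]), ("D", 3, 1)]))  # -> ['A', 'b']
```

Note how the C command carries the old value, which is what makes receiver-side conflict detection possible.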

Merging the changes described by a difference string into an item is accomplished by the Pick/BASIC program in 'dm,bp,includes merge.sub'. For example:
* Get the old item
read m$olditem from conflict,'*OLD*':id

* Get the difference string
read m$diff from conflict,'*NEW*':id

* Merge the changes in
include dm,bp,includes merge.sub
if m$code=0 then
* OK
end else
* Error
end
Note the include is not a subroutine. It is a fragment of code which can be included in-line. See the include itself for a description of the interface.
Other includes of interest are 'dm,bp,includes sdiff.sub' which builds a difference string between two items and 'dm,bp,includes sdiff2.sub' which merges (combines) two difference strings into one, accumulating the changes.
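The effect of accumulating two change records, as 'sdiff2.sub' does, can be sketched in Python. This is an illustration of the principle only, not the actual include; the (old, new) pair encoding is an assumption of the sketch.

```python
def merge_changes(first, second):
    # Combine two successive (old, new) changes to the same attribute into
    # one record, so only one change is kept and one transmission occurs.
    old1, new1 = first
    old2, new2 = second
    assert new1 == old2, "second change must start from the first's result"
    return (old1, new2)

# 'a' changed to 'A', then 'A' changed to 'A1': one cumulated change remains
print(merge_changes(("a", "A"), ("A", "A1")))  # -> ('a', 'A1')
```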


Creating Serial Devices on AP/Unix :

The AP/Unix implementations can dynamically create serial devices which are not linked to any Pick process. These devices are accessible only through Pick/BASIC GET and SEND statements. It is convenient to create devices this way by inserting in the user coldstart macro the "dev-make" statements required to create them (see the documentation 'dev-make, TCL'). For example:
dev-make -t serial -n 120 -a '/dev/tty9'
creates a device 's120' associated with the Unix device '/dev/tty9'.


TCL interface :

Some functions are accessible from TCL, to be inserted in macros:
start Start the IO daemons.

stop Stop the IO daemons. When an IO daemon is in the middle of a call, it will not respond to a stop request.

queue Display the queue(s). The (V) option shows detailed information about the queues instead of just the number of queue entries.

purge Purge a queue. The system must be specified by a clause system=name. NO confirmation is required.

status Display a device's last message. The device must be specified by a clause device=name.

mail Read the messages. This option has the following arguments: user=user.name. If 'user=' is omitted, the current user name is used. If the (Q) option is used, only the number of messages is displayed. A prompt '*' is displayed. The commands are:
n : Show message number 'n'.
L : List the message numbers and subjects.
Pn : Print message number 'n' on the printer.
D[{n}|*] : Delete message 'n', or the last message displayed or printed, or all messages ('*') for the selected user.
Q : Return to TCL.

msg Send a message to a user on another system. The messages are stored in the dialer subsystem mail file, and can be examined with the option 7 on the main mail menu. This option has the following arguments: user=user.name text=message subject=subject.text . When there is a space in a field, the data must be enclosed in quotes. If 'user=' is omitted, the current user name is used. If 'subject=' is omitted, the user is prompted for an optional one line subject. If 'text=' is omitted, the user is prompted for several lines of text, terminated by one line containing only a period ('.'). If the text of the message is null, no message is queued.

sysmap Set the system map. The option account=accname defines the account to be changed. The option system=[*list|name{,name,...}] defines to which system(s) the updates must be transmitted. '*list' specifies that the list of system(s) is in the select list 'list'. The optional option remacc=accname{,accname,...} specifies the account name(s) on the remote system(s). If this option is not specified, or if an account name in the list is missing, the account name on the remote system is identical to the local account name. There must be as many remote account names specified as there are remote system names. The optional option files={-}{dict|data}[*|-|*list|name{,name,..}] specifies the files to be changed. If this option is not specified, the files in the account are not changed; only the system map is affected. The '-' sign specifies that the CALLX correlative must be removed from the specified files. If the keyword 'dict' or 'data' is specified, then only this element of the file is affected. Note a space must be present, therefore the whole argument must be surrounded by quotes. The form 'files=*' specifies all files in the account. The form 'files=*list' specifies that the list of files is contained in the select list 'list'. The form 'files=name{,name...}' gives an explicit list of file names. See the examples below for practical examples.


Files

The file 'dialer.log' used by the dialer subsystem is created automatically in the 'dm' account. Its data levels are:
billboard
System wide file keeping track of the posted updates.
conflict
Contains the conflict information
devices
Valid devices list.
dialer.log
Status of the IO daemons.
log
Permanent log.
mail
File containing messages for the System Administrator, such as error notices, acknowledgments, submit results, etc.
map
System map.
queue*system
Queue to the system 'system'.
spool
Spooled data.
systems
Remote system definitions.
Syntax dialer {cmd} {device=name} {system=[*list|name{,name...}]} {user=name} {subject=descr} {text=message} {account=accname} {files={-}[*|*list|name{,name,..}]} {remacc=name{,name...}} {(options}
Options Q Quiet. Suppress all messages. On the 'mail' command, this option just shows the number of messages, if any.

V Verbose. Display more information.
Example
Examples :
dialer start
  Start the dialer subsystem from TCL

dialer queue (v
  Display a detailed status of the queues

dialer status device=s120
  Display the last messages produced by the IO daemon associated with the device 
's120'.

dialer stop (q
  Stop the dialer subsystem, suppressing all messages.

dialer purge system=seattle
  Purge the queue to the remote system 'seattle'.

dialer msg system=dev user=bob subject="Down time" text="We will 
shut down tomorrow at 12:00"
  Queue a message for the user 'bob' on the system 'dev'.

dialer sysmap account=bob system=dev,prod,back remacc=bob2,,bob3 files=*
Defines the system map for the account 'bob'. Updates will be 
transmitted to the systems 'dev', 'prod' and 
'back'. The remote accounts on these systems are respectively, 
'bob2', 'bob' (same as on the source machine) and 
'bob3'. All files, dict and data, are affected.

dialer sysmap account=bob files=-*
Defines the system map for the account 'bob'. The CALLX correlative 
is removed from all files. Note it is not necessary to specify the remote 
system when removing the CALLX correlative.

dialer sysmap account=bob system=dev files='data *'
Defines the system map for the account 'bob'. Updates will be 
transmitted to the system 'dev'. The remote account is the same as on 
the source machine. All files are affected. Only the data sections are changed, 
not the dictionary. Note the required quotes.

dialer sysmap account=bob system=dev files='data names, address,zip'
Defines the system map for the account 'bob'. Updates will be 
transmitted to the system 'dev'. The remote account is the same as on 
the source machine. Only the data sections of the files 'names', 
'address' and 'zip' are affected. Note the required quotes.

dialer sysmap account=bob system=dev files='dict *myfiles'
Defines the system map for the account 'bob'. Updates will be 
transmitted to the system 'dev'. The remote account is the same as on 
the source machine. Only the dict section of the files specified in the select 
list 'pointer-file myfiles' are affected.

dialer sysmap account=bob system=dev remacc=bob2
Defines the system map for the account 'bob'. Updates will be 
transmitted to the system 'dev'. The remote account is bob2. Only the 
system map is changed. No file is affected.
Purpose
Related tcl.dialer-copy
general.dialer

basic.onerr

Command basic.onerr Definition/BASIC Program
Applicable release versions: AP
Category BASIC Program (486)
Description identifies the statements to execute when an error occurs during commands which perform operations on peripheral storage devices.

The "system(0)" function contains the error message associated with the "onerr" condition.

The "onerr" condition is used in the same construct as the "then/else" construct.

Either an "onerr" clause or an "else" clause is allowed in a statement, but not both. They are similar in that they are taken when the peripheral operation hits a decision point or failure path. The "onerr" form provides a great deal more functionality, but at a higher cost: when the "onerr" clause exists, all media handling must be handled by the program. This means, for example, that the program is responsible for handling an "end of reel" condition, and thus would have to prompt the operator for the next reel. By contrast, the "else" clause handles the same situation by passing through the standard system routine to "mount next reel and press 'c' to continue".
Syntax onerr statement.block
Options
Example
readt tape.rec onerr
  crt "oops. we've got a system(0) error of " : system(0)
end

In this example, the "onerr" path is taken in the event of any 
abnormal condition, and it simply displays the value of system(0).

readt tape.rec onerr
    condition = system(0)
    begin case
    case condition = 1
         print "tape is NOT attached..."
    case condition = 5
         print "process end of reel"
    case condition = 6
         print "tape is write-protected"
    end case
end

This example illustrates how processing for an error could be divided out based 
upon the error which occurred.
Purpose
Related basic.readt
basic.readtl
basic.readtx
basic.then
basic.then/else.construct
statement.block
tcl.t-att
basic.weof
basic.else
basic.writet
basic.rewind
basic.system

tcl.bformat

Command tcl.bformat Verb: Access/TCL
Applicable release versions: AP 6.1
Category TCL (746)
Description formats a FlashBASIC or Pick/BASIC source program and updates the source file with the formatted item.

The action of bformat is identical to blist, except that the output is not printed, but is filed into the source file, overwriting the original source program.

file-name is any Pick/BASIC source file.

item-list is a list of program names, or an asterisk (*) for all items, or null if there is a select list active.

The bformat verb is table driven, like the BLIST verb. The table is stored in the messages file. The item-id of the control table is "BF" followed by a 4-character hexadecimal number. The table number is contained in line 4 of the verb definition. The default table number is 0; thus, the default table item-id is "BF0000". The structure of the table is identical to the BLIST control table. The only default options are R for renumber, C for comment indent inhibit, and the numeric options for specifying the starting statement label number and increment. It is possible to specify the numeric options in the control table, and leave the R option as a run-time option. In this case, the numeric options need not be specified at run-time.
Syntax bformat file.name {{item.list}|{*}} {(options}
Options r Renumber statement labels and all references to the labels (GOTO, GOSUB, RETURN TO).

n1-n2 Used with the R option, n1 specifies the new beginning statement label number, n2 specifies the increment between statement labels. Both n1 and n2 default to 10.

The R option is useful for renumbering a source program to make it easier to follow. If the R option is used, then a line containing only an exclamation point (!) and a statement label may be used to change the current new statement label number to the label specified on that line. This is true as long as the current statement label number is greater than what would be the next number (previous label number plus increment). This is useful when specific labels make the program easier to follow, such as at the beginning of subroutines, etc. Note: Certain forms of the GOTO, GOSUB, and RETURN TO statements will not be renumbered. This occurs when there is no blank between the key word and the destination label, e.g. GOTO20 will not be renumbered, but GOTO 20 will. Also, a mixture of numeric and alpha-numeric statement labels following an ON GOTO or ON GOSUB will fail to renumber any numeric labels past the first alpha-numeric label.
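The renumbering behavior, including the 'GOTO20' caveat, can be modeled with a small Python sketch. This is a hypothetical illustration, not the actual bformat implementation; it only handles plain numeric labels and space-separated GOTO/GOSUB references.

```python
import re

def renumber(lines, start=10, step=10):
    # Pass 1: map each numeric statement label (a number at the start of a
    # line) to its new value: start, start+step, ...
    mapping, new_label = {}, start
    for ln in lines:
        m = re.match(r"^(\d+)\b", ln)
        if m:
            mapping[m.group(1)] = str(new_label)
            new_label += step
    # Pass 2: rewrite labels and references.  As with bformat, a reference
    # with no blank after the keyword ('GOTO20') is left untouched.
    out = []
    for ln in lines:
        ln = re.sub(r"^(\d+)\b", lambda m: mapping[m.group(1)], ln)
        ln = re.sub(r"\b(GOTO|GOSUB) (\d+)",
                    lambda m: m.group(1) + " " + mapping.get(m.group(2), m.group(2)),
                    ln)
        out.append(ln)
    return out

src = ["5 PRINT 'HI'", "GOTO 5", "GOTO5"]
print(renumber(src))  # -> ["10 PRINT 'HI'", 'GOTO 10', 'GOTO5']
```

Two passes are needed because a reference may point forward to a label that has not been renumbered yet.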
Example
Purpose
Related basic
tcl.blist
tcl.renumber
filename.messages

general.dialer

Command general.dialer Definition/General
Applicable release versions: AP 6.1
Category General (155)
Description defines a subsystem which allows Advanced Pick systems to communicate over serial lines to transfer items, execute commands on remote sites and synchronize data bases by transferring updates. This section is a general introduction to the dialer subsystem. See the section "dialer, TCL" for a detailed description of the TCL command.


Overview :
The dialer subsystem is a set of processes running on each system, which communicate with each other at predetermined times set by the System Administrator, in a batch mode. The main functions are:
- Copying items from system to system(s) (see the section "dialer-copy, TCL").
- Submitting commands for remote execution on a system (see the section "dialer-submit, TCL").
- Synchronizing a data base across several machines. The data base is replicated on each system, and updates made on the various systems are propagated to the other systems. For example, if attribute 2 of item A is modified on system M1, and attribute 3 of the same item A is modified on system M2, the dialer subsystem transmits both updates so that they are applied to all systems.
The various systems on the network are identified on each system by a name and the phone number(s) where they can be reached.
All communication is protected by an authentication process which makes sure the remote systems are allowed to call in, to submit commands and to make updates to the data base.

Definitions :
"Local System"
The local system is the system to which the user is currently connected.

"Remote System"
A remote system is a system which is accessible from the local system. Note that all systems must be known to all other systems, for security reasons. On a large network, maintaining the list of systems can be cumbersome, so it may be advisable to maintain the list of all systems on one central administration machine, which has to be declared manually to all other systems, and use the dialer copy capability to copy the list to all other sites.

"Serial Device"
The System Administrator designates one or more serial devices to be used by the dialer subsystem. The devices can be dedicated to the dialer subsystem, or they can be shared (not at the same time, of course) with regular terminal activities. For example, a modem line can be used during the day for remote logon, and used at night for data transfer. The serial devices are identified by a unique name. See the section 'Sharing a Serial Device' below.
Serial devices can be designated as input only (they cannot call another system), output only (they refuse incoming calls), or mixed (they can either call or accept incoming calls). Input-only channels are useful for configurations where a system may receive calls from many other systems; they reduce the probability that calls will be refused because the system is busy calling other systems. Note that an input-only serial device CAN transmit data to another system: it does so only after the other system has sent all it had to send.
IMPORTANT: The device MUST be able to support 8 bit characters to be able to transmit the Pick system delimiters ( char(253), char(254), char(255) ).
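A small Python illustration of why 8-bit transparency matters: the Pick delimiters occupy the top of the byte range, and masking off the eighth bit, as a 7-bit serial path effectively does, turns them into ordinary printable characters and destroys the item structure.

```python
VM, AM, SM = 0xFD, 0xFE, 0xFF          # char(253), char(254), char(255)

def strip_high_bit(data: bytes) -> bytes:
    # What a 7-bit transmission path effectively does to each byte.
    return bytes(b & 0x7F for b in data)

record = bytes([ord("a"), VM, ord("b"), AM, ord("c")])
mangled = strip_high_bit(record)
print(mangled)  # -> b'a}b~c': the delimiters collapse into '}' and '~'
```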


"IO Daemon"
An IO daemon is a Pick background phantom process which performs all IO. There is one IO daemon per enabled serial device used for the communication.

"Dialer"
A dialer is a program responsible for managing a communication device (a modem). It understands the particular protocol required to instruct the modem to dial a number, hang up, etc. A dialer is normally associated with a serial device. Currently, only the 'hayes' dialer is provided. It may be necessary to modify the dialer program to adapt to a special modem type.

"Update Posting"
For the dialer subsystem to transmit an update to other systems, the system must be made aware of the fact that an update took place. This is done by inserting a CALLX call to "dm,bp, dialer.post" in the correlative (attribute 8) of the D pointer of the file being updated. The dialer front end menu allows doing this on whole accounts. When an item is filed, the "dialer.post" subroutine compares the old and the new item and builds a 'difference string' which describes what has changed: the old value, the new value and where the change occurred. This not only gives a description of the change, but also reduces the amount of data that has to be transmitted. The changes are determined at the value level. In other words, if the second value of an attribute has been changed, for example, only the second value is transmitted. If a subvalue is changed, the whole value is transmitted. If more than one change is made to an item, the changes are merged, so that only one record of the change is kept, and only one transmission occurs. For example, consider the item 'A' updated as follows:

    Old                      New
    ---                      ---
A:               ----->  A:
001 a                    001 A
002 b                    002 b
    bb                       bbb
    bbb                  003 c
003 c
The data transmitted is:
- Change attribute 1 from 'a' to 'A'
- Change value 2 in attribute 2 from 'bb' to 'bbb'
- Delete value 3 in attribute 2
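The value-level comparison can be sketched in Python. This is a simplified positional comparison producing the V-command triplet for a single attribute; the actual 'dialer.post' logic is more elaborate, and the list representation here is an assumption of the sketch.

```python
def value_diff(old_vals, new_vals):
    # Compare two value lists position by position and return
    # (value numbers, old values, new values) for positions that differ,
    # mirroring the V command triplet.  Fewer old values than new values
    # means values were added; fewer new values means values were removed.
    nums, olds, news = [], [], []
    for i in range(max(len(old_vals), len(new_vals))):
        o = old_vals[i] if i < len(old_vals) else None
        n = new_vals[i] if i < len(new_vals) else None
        if o != n:
            nums.append(i + 1)
            if o is not None:
                olds.append(o)
            if n is not None:
                news.append(n)
    return nums, olds, news

# attribute 2 of the example: 'b]bb]bbb' becomes 'b]bbb'
print(value_diff(["b", "bb", "bbb"], ["b", "bbb"]))
# -> ([2, 3], ['bb', 'bbb'], ['bbb']), matching the V2 command: 2]3 / bb]bbb / bbb
```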

"Conflicts"
When a system receives an update from another system, the update is applied to the local item. However, the receiving system checks for conflicts by making sure that the 'old value' received from the remote system is identical to the 'old value' on the local system. For example, consider two systems M1 and M2, where the item 'A' is updated:
System M1
---------
A:               ----->  A:
001 a                    001 a
002 b                    002 b
    bb                       BB
    bbb                      bbb
003 c                    003 c

System M2
---------
A:               ----->  A:
001 a                    001 a
002 b                    002 b
    bb                       bb
    bbb                      BBB
003 c                    003 c

In this example, there is no conflict, and the resulting item will have both changes in the 2nd and 3rd values on the second attribute. However, consider the following:
System M1
---------
A:               ----->  A:
001 a                    001 a
002 b                    002 b
    bb                       BB
    bbb                      bbb
003 c                    003 c

System M2
---------
A:               ----->  A:
001 a                    001 a
002 b                    002 b
    bb                       BBB
    bbb                      bbb
003 c                    003 c

In this example, there is a conflict, because the system cannot determine whether 'BB' or 'BBB' is 'right' for the 2nd value of the second attribute. The update is logged as a conflict on BOTH systems, is NOT applied to either data base, and the System Administrator will have to resolve it by determining which, if any, is correct. A 'copy' operation can then be done to transfer the correct version to the other system. The dialer menu has an option to show the conflicts, at what times (adjusted for the different time zones) they occurred, and what the conflict is.
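The conflict rule above can be sketched in Python for a single value. This is an illustration only, with invented names; the actual check is performed by the dialer subsystem when an update arrives.

```python
def check_update(local_old, remote_old, remote_new):
    # An incoming change carries the remote system's 'old value'.  It is
    # applied only if that old value still matches the local value;
    # otherwise the update is logged as a conflict and neither data base
    # is changed.
    if local_old != remote_old:
        return ("conflict", local_old)
    return ("apply", remote_new)

# M1 already changed 'bb' to 'BB'; M2's update says 'bb' -> 'BBB':
print(check_update(local_old="BB", remote_old="bb", remote_new="BBB"))
# -> ('conflict', 'BB')
```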

"System Map"
The system map defines to which systems an update is to be transmitted. The granularity is the account, but it is possible to define an exception list, to send selected files to a different system. Updates can be sent to more than one system. The system map is created using a dialer menu option. For performance reasons, the system map is kept in a named common. Therefore, if the system map is changed, the user processes must be logged off and logged back on.
IMPORTANT: The dialer.post routine must be able to determine the actual account in which the file being updated resides. The account in which the application is run must either own the file (i.e., have a D pointer), or have a valid direct q-pointer to the actual file. Explicit path names (i.e., 'dm,bp,') are not permitted, nor q-pointers to q-pointers.

"Billboard"
The billboard is a system-wide file which contains a record for each posted update not yet transmitted. This file is checked every time an update is posted, to be able to merge the various changes made to an item. On large systems, it might be necessary to resize this file.

"Spool"
The spool file contains the actual data to be transmitted. The spool items are referenced by other elements of the dialer subsystem.

"Queue"
A special queue file is created for each remote system. This queue contains a linked chain of requests to be sent to a given system. The body of the request is not stored in this file. In the case of a 'copy' operation (copy an entire item), the body of the item is either left in the original file, or is copied into a 'spool' file. In the case of an update item (result of posting an update), the difference string is stored in the spool file.


Sharing a Serial Device :

Most systems will not be able to have one serial port and modem dedicated to the dialer subsystem. More likely, the modem port will be used for remote maintenance and remote logon during most of the day. It is however possible to instruct the dialer subsystem to take over the serial port for a specified period, for example from midnight to 2:00 a.m. every day, to call remote systems or to accept incoming calls from other systems. This way, the port can be used normally during the day. This is done when declaring the serial devices to the dialer subsystem, using the following convention:
- A dedicated device number is prefixed by a 'S'
- A shared device number is the Pick process number to which the device is normally linked to.
For example, 'S119' designates a dedicated serial port. '200' represents a shared serial device. The dialer subsystem, when it is time to establish a communication, will steal the device associated to the Pick process '200' (usually the device '200') by doing an 'unlink-pibdev' to free the device. If the process was logged on to a user, it is logged off prior to the stealing and the modem is hung up.


Dialer Programs :

A dialer program is a Pick/BASIC subroutine which handles the modem specific protocol. These programs must be stored and compiled in the file 'dm,bp,dialers'. They do not need to be cataloged in any account. The module 'hayes' handles the Hayes AT protocol. This module can be used as an example to develop custom dialer programs. The commands the subroutine must be able to handle are:
- DIAL$IDENTIFY
Return an identification string (type and version number). This string is for information purposes only. It should return at least the dialer name and version, and may interrogate the modem itself to get its type and revision number.
- DIAL$RESET
Reset the modem. This command must set the modem in an appropriate state for a dial command to succeed and to accept incoming calls. This command may be sent in the middle of a communication; therefore, it must hang up any ongoing session.
- DIAL$CALL
Dial out. The argument is the phone number, including any special characters required for the modem and/or the phone system (prefix, pauses, ...). The return code, if positive, indicates the communication speed, expressed in baud. If the speed is unknown, the dialer must return 0 for a successful dial out.
- DIAL$HANGUP
Hang up the modem. This command is sent in the middle of a communication to interrupt it.
- DIAL$DEFAULT
Reset the modem to the factory defaults.
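A custom dialer module can be modeled on 'hayes'. The sketch below shows the command dispatch such a subroutine must implement, written in Python for illustration rather than Pick/BASIC; the numeric command codes and the send() serial-write hook are assumptions of this example, and the AT strings are typical Hayes commands:

```python
# Command codes mirroring the DIAL$ equates (the numeric values are assumptions).
DIAL_IDENTIFY, DIAL_RESET, DIAL_CALL, DIAL_HANGUP, DIAL_DEFAULT = range(5)

def dialer(cmd, arg, send=lambda s: None):
    # Dispatch one dialer command; send() stands in for a write to the
    # serial device. Returns (code, info): 0 = success, negative = error.
    if cmd == DIAL_IDENTIFY:
        return 0, "sketch hayes dialer v0.1"    # at least name and version
    if cmd == DIAL_RESET:
        send("+++"); send("ATH0"); send("ATZ")  # hang up any session, then reset
        return 0, ""
    if cmd == DIAL_CALL:
        send("ATDT" + arg)                      # arg: number with prefixes/pauses
        return 0, ""                            # 0 = success, speed unknown
    if cmd == DIAL_HANGUP:
        send("+++"); send("ATH0")
        return 0, ""
    if cmd == DIAL_DEFAULT:
        send("AT&F")                            # restore factory defaults
        return 0, ""
    return -1, "unknown command"
```

A real module would also read and interpret the modem's result codes (CONNECT, BUSY, NO CARRIER) to build its return value.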
Syntax
Options
Example
Purpose
Related tcl.dialer
tcl.dialer-copy

tcl.level.pushing

Command tcl.level.pushing Definition/TCL
Applicable release versions: AP
Category TCL (746)
Description a term used to describe the ability to interrupt a process, invoke a "new" TCL prompt, and execute any valid TCL command.

If a <return> is issued at the "new" level TCL prompt, control returns to the previous level, exactly where it left off.

It is possible to "logto" another account while at a "pushed" level. When a <return> is issued at the TCL prompt, the process automatically returns to the original account.

Any tape or peripheral storage devices attached to the process when logging to another account at another level remain attached in the new account.
Syntax
Options
Example
Purpose
Related levels
tcl.esc-data
tcl.brk-debug
tcl.esc-level
basic.break
tcl.brk-level
tcl.end
prompt.chars.ap

tcl.network-setup

Command tcl.network-setup Verb: Access/TCL
Applicable release versions: AP 6.2
Category TCL (746)
Description allows the setup and control of a Pick network. If no arguments are supplied, a menu is displayed. This is the normal form of operation. See the section "network, General" for a discussion of the important concepts of network configuration.


Using the menus :
All operations are controlled through menus. If the terminal allows it, arrow keys can be used where indicated:
ENTER Validate the highlighted choice.
number From 0 to 9. Selects the corresponding choice. '0' selects option 10.
CTRL-N Move cursor down. (down arrow)
CTRL-B Move cursor up. (up arrow)
CTRL-X Cancel. Applicable only when input is requested.
ESC Quit. Go back to previous menu, or back to TCL. This key can be used to terminate all menus.
Q Quit. Go back to previous menu.
X Exit. Go back to TCL from any menu.

When the cursor is moved to a new field, a short help is displayed in the message area.


Screen layout :
The screen is divided in two sections:
- The menu section, where menus are displayed.
- The message section, where results, messages or help are displayed.


Definitions :
host
A definition for a local or remote Pick machine. There should be one for each system.


Main Menu
1 Start network servers
Starts up all local network servers. Before running this command, it is necessary to define the network using the "Define local hosts" and "Define remote host" options. The server status is automatically displayed after network servers are started.

2 Stop network servers
Stops all local network servers.

3 Server status
Displays the status of all local servers.

4 Server statistics
Displays transaction statistics for all local server processes, such as reads & writes per second.

5 List all hosts
Displays an access listing of all defined hosts, both the local and all remotes.

6 Print all hosts
Lists all hosts to the currently selected printer.

7 Define local host
Configure the local network host. If you are defining an entire network, use option 9, Define all hosts. The following input fields are requested:

Host Name :
Any alpha-numeric string. This will be the Pick name of the local host. This field defaults to the name of the Unix machine if no local host is defined.
Optional Host Description :
An optional description of the host. The physical description of the machine, a description of its location, or the name and number of its system administrator would be helpful here.
TCP Name/Address :
The TCP Name/Address where the Pick machine exists. For the server, this should be the local Unix host name.
TCP Service Name/Number :
The TCP service number. Normally, the default of "pnfs" is correct as this is the default Pick service name. An alternate service is only necessary if there is more than one Pick virtual machine on the same Unix box.
Options :
These are driver-specific options. These may be left blank in the simplest case.
Transmit timeout :
Time (in seconds) allowed for each remote operation. After this time, the client assumes the server is down and follows the error path. This value should be increased when the server is slow and 'no response' errors are randomly encountered.
Accept Timeout :
Time (in seconds) during which a server process remains bound to a client when no operations are performed. A setting of 0 disconnects after every operation and is not recommended. For most situations, a value between 1 and 10 provides a good balance between too many TCP connections in the TIME_WAIT state and too few server processes. Note that a server process remains bound to its client while it holds item locks for that client, regardless of the Accept Timeout value.
Initial Server Processes :
The number of server processes that are started when the network is started. To change the number of servers, you must shut down the network, change this number, and then restart the network.
Host ID Number :
The host ID number is used to differentiate between different virtual machines on the same Unix host. If you have only one Pick virtual machine on each Unix machine, this number is not needed. If you have more than one Pick virtual machine on a Unix host, then pick small numbers (less than 128) for each virtual machine on that host.
Confirm (y/n/q) :
'y' to confirm the host modification. 'n' to go back to any of the previous fields. 'q' to quit and abandon.

8 Define remote host
Configure a remote network host. After choosing this option, another menu appears asking which host to edit, as well as a "New Host" option to create a new host. Select the "New Host" option when initially configuring a network. Next, the following input fields are requested:

Host Name :
Any alpha-numeric string. This will be the Pick name of this remote host.
Optional Host Description :
An optional description of the host. The physical description of the machine, a description of its location, or the name and number of its system administrator would be helpful here.
TCP Name/Address :
The TCP Name/Address where the Pick machine exists.
TCP Service Name/Number :
The TCP service number. Normally, the default of "pnfs" is correct as this is the default Pick service name. An alternate service is only necessary if there is more than one Pick virtual machine on the same Unix box.
Options :
These are driver-specific options. These may be left blank in the simplest case.

Host ID Number :
The host ID number is used to differentiate between different virtual machines on the same Unix host. If you have only one Pick virtual machine on each Unix machine, this number is not needed. If you have more than one Pick virtual machine on a Unix host, then pick small numbers (less than 128) for each virtual machine on that host.
Confirm (y/n/q) :
'y' to confirm the host modification. 'n' to go back to any of the previous fields. 'q' to quit and abandon.

9 Define all hosts
Configure all hosts on a Pick network. Use this option to configure an entire Pick network from one station. This option, combined with the next three, allows a user to enter configuration data for all hosts, both the local one and all remotes, then dump the information to tape. That tape is then loaded on each separate virtual Pick machine. See menu option 7 for details on each prompt.

10 Dump host file
A tape is selected, and network items in the dm,hosts, file are dumped to it.

11 Load host file
A tape is selected, and items prepared by option 10 above are loaded from that tape.

12 Declare local host
After network host items are loaded from tape during option 11 above, the user must tell the system which host item describes the local Pick virtual machine. This option presents the user with a list of hosts, and the user picks the one that describes this machine.

13 Exit
Exits the program.


Non Menu Operation :
It is possible to perform some operations from TCL by specifying a 'command' on the TCL line. This form is useful for performing automatic commands in macros.

'command':

start
Start the local server processes.

stop
Stop all local server processes.

status
Display the status information for all local server processes.

statistics
Display the transaction statistics for all local server processes.
Syntax network-setup {{command}}
Options Q Quiet. Valid only for the non-menu operation. Suppresses all messages.
Example
Purpose
Related tcl.:kill-network
tcl.:start-network
tcl.:kill-node
tcl.:restart-node
tcl.:init-network
tcl.net-status
tcl.network-status

perf

Command perf Definition/General
Applicable release versions: AP/Unix
Category General (155)
Description describes various tips, utilities and performance monitoring tools which allow identifying possible bottlenecks in a given configuration.

Introduction

When performance problems are experienced on a system, it is necessary to distinguish problems due to the Unix environment and problems due to a configuration not adapted to the application.

The reader is assumed to have a fairly good understanding of a Pick environment and some knowledge of Unix.

Overview

Unix-related performance problems are usually transient: at a given time, system performance degrades noticeably, but overall performance remains satisfactory. These problems are usually fairly easy to track down and fix.

Configuration problems are more insidious, in that they appear repetitively under some circumstances. The basic principle is to monitor the activity of the system over a long period of time during normal system activity. A series of statistics are taken and stored in a log file for later analysis.

The command to monitor the activity is buffers. The command to display the log file is buffers.g.

Unix Related Bottlenecks

The first elements to look at are the results provided by sar, to eliminate configuration problems due to unexpected Unix activity alongside the Pick activity. Device-related problems may also have very visible effects on the overall performance.

SAR Results

See the section 'System Activity Reporting' in the chapter 'System Administration' in the Installation or User's Guide for more details about sar.

CPU usage:

A well-balanced system should have a high percentage (above 80-90%) of user CPU usage. High system-mode usage indicates too many process switches or too many system calls. A non-zero 'waiting for I/O' CPU percentage indicates a disk bottleneck. If the system CPU usage becomes very high without high I/O activity, this may indicate a device problem (see next section).

Paging activity:

The absolute golden rule is to avoid swapping (paging) during normal operations. To avoid swapping, the physical memory must be increased, or the amount of memory allocated to Pick decreased. Surprisingly, if the system swaps, Pick performance may improve by reducing the amount of memory allocated to Pick in the configuration file. Obviously, there are some lower limits which should not be crossed. The Pick activity monitoring should allow determining how far it is possible to go down that path.

If possible, avoid using costly Unix commands during peak hours (compiling is painful, X-window requires a lot of memory, etc...).

If some significant swapping is taking place, verify that the memory allocated to Pick (see the verb what) is not bigger than the total amount of physical memory minus the minimum size of memory required for the Unix kernel (from 2 megabytes for SCO Unix to 6 megabytes for AIX, depending on the implementation).

To identify which processes are running, do the following (as 'root'):

ps -edalf | grep R

S UID PID PPID STIME TTY TIME CMD
R root 4719 1 ... 07:08:53 24/0 0:05 ap - 24 tty24
R root 8999 10534 ... 07:58:33 89/0 0:00 ps -edalf
S root 10534 4133 ... 08:58:33 89/0 0:00 grep R
R demo 26242 25467 ... 07:10:03 75/0 0:16 demo

The above example shows an extract of the result. It shows that process 4719 runs Pick on PIB 24. Process 26242 is a non-Pick process which has used three times as much CPU as the Pick process did. By running this command several times, if some processes show up repeatedly, it will be possible to identify processes that perhaps should not be running during peak hours.
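The manual scan of the 'ps -edalf | grep R' output can also be automated. A minimal Python sketch, assuming a simplified column layout (real ps output has more columns, so the indices would need adjusting):

```python
def running_pids(ps_output):
    # Return (pid, command) for processes in the running state 'R'.
    # Assumed simplified layout: S UID PID PPID TIME CMD...
    result = []
    for line in ps_output.strip().splitlines()[1:]:   # skip the header line
        cols = line.split()
        if cols and cols[0] == "R":
            result.append((int(cols[2]), " ".join(cols[5:])))
    return result
```

Feeding successive snapshots to such a filter makes it easy to spot processes that appear in the running state repeatedly.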

Device Problems

The most common problems with TTYs are due to incorrect cabling. When Unix tries to spawn a process (Pick or Unix) attached to a terminal, the device must be ready. If not, Unix 'waits' a bit and tries again. Worse, a port with a DCD line in an unstable state can generate many interrupts, which, in turn, generate 'hang up' signals, creating a very significant system load. To identify such problems, do the following (as 'root'):

ps -edalf | grep '?'

S root 4184 9047 ... 09:06:26 89/0 0:00 grep ?
S root 25185 1 ... 07:08:52 ? 0:00 ap - 9 tty9
R root 30571 1 ... 07:08:52 ? 23:45 ap - 19 tty19 printer

This command shows the processes attached to terminals the system could not open. In the above example, the second line shows a Pick process (pid=25185) in a sleeping state (S): this process does not consume any CPU. The system could not open the terminal /dev/tty9, but abandoned trying to open it. The third line shows a Pick process (pid=30571) in a running state (R): this process does use CPU, as the CPU usage '23:45' shows. The system tried to open the device /dev/tty19 and failed, as in the first case, but the cable is probably incorrect or hanging loose at the other end, and is generating constant signals.

To fix this situation, the terminal must be connected properly, or the associated entry in /etc/inittab turned to off instead of respawn. Unfortunately, it is sometimes very difficult to identify which device is in trouble when the above command does not show it explicitly. Only careful checking of the cables, or finding which ports did not start as expected, will allow the faulty port to be found by elimination.

Identifying Configuration Problems

Statistics

The following elements are monitored by the buffers command:


Name Description

Activ Number of process activations. Each disk read, keystroke, or process wake-up after a sleep increments this counter. When the number of frame faults is subtracted from this counter, this gives an idea of the volume of data entry.

Idle Idle time. Not supported on Unix Implementations

Fflt Frame faults. This counts the number of disk reads.

Writes Disk writes. All writes are normally done by the background flush process to update the disk from dirty frames in memory. A high number indicates either a high update rate or insufficient memory allocated to the Pick virtual machine.

Bfail Buffer Search Failures. This counter counts the number of failures to allocate a buffer in memory for a new frame. When non-zero, this indicates that the memory is insufficient. This counter should never be non-zero.

RqFull Disk Read Queue Full. Not supported on Unix Implementations

WqFull Disk Write Queue Full. This counter counts the number of instances where the flusher cannot keep up with the dirtying of frames. This is an indication that either the write queue is too small for the given configuration (see the section 'Flusher Adjustments' later in this appendix) or that the memory is too small.

DskErr Disk Errors.

Elapsd Elapsed time. This is the time in seconds between two samplings. For internal use only.

DblSrc Double Search. This counts the number of collisions between two or more processes frame-faulting on the same frame at the same instant. A non-zero counter should be exceptional.

Breuse Buffer Re-Use. This counts the number of instances where a memory buffer has been allocated by one process to read one FID and another process allocated the same buffer to contain another FID. A non-zero counter should be exceptional.

Bcolls Batch Contentions/Collisions. This counts the number of collisions between a 'batch' process (i.e., a process which is disk intensive) and an 'interactive' process (i.e., a process which is keyboard input intensive). By default, Pick ensures that interactive processes are given priority over batch processes in accessing certain resources. See the section 'Batch Processes' in this appendix for more details.

Sem Semaphores Collisions. This counts the number of collisions between two processes trying to access a systemwide internal table.

Vlocks Virtual Locks Failures. This counts the number of cases when a Pick process tried to assert a virtual lock and failed to acquire it because another process had it.

Blocks FlashBASIC or Pick/BASIC Locks Failures. This counts the number of cases when a Pick process tried to assert a FlashBASIC or Pick/BASIC lock and failed to acquire it because another process had it.

B0reg Buffers with no Virtual Registers attached. These are the buffers not currently attached for immediate reference. At any given time, very few buffers are actually attached. It is therefore normal that this number be almost equal to the total buffers in memory.

B1reg Buffers used by more than one process, but not used by their owner any more. There should be very few of these.

B2reg Buffers used exclusively by their owner. On RISC implementations, this situation allows better performance, because there is no conflict on these buffers. Normally, these buffers contain private workspace, data which is not shared, etc...

B>3reg Buffers used both by their owner and by other processes. This number represents the number of pages actually shared among processes (data files) at any given time.

ww Write Required. This counts the number of buffers currently modified and not yet written to disk.

IObusy Buffers being read from disk. This counts the number of pending disk reads. This counter is usually zero, since reads complete too quickly to be sampled.

Mlock Number of memory-locked buffers. If the ABS section is locked, this number is at least equal to the ABS size. Tape buffers are also included when the tape is attached.

Ref Referenced Buffers. This counts the number of buffers which have been recently used.

WQ Write Queued. Number of buffers currently enqueued for write.

Tophsh Top of Hash. This number measures the quality of the hashing algorithm used to find a frame in memory. This number must be high (above 60% of the total buffers).

avail Available buffers. Number of buffers that are candidates for replacement. These are the buffers that no process has used recently. When this number drops below 10% of the total buffers, performance decreases significantly.

batch Batch Buffers. This is the number of buffers used by batch processes. A high level (approaching 50% of the disk buffers) indicates that disk-intensive activity by batch processes is taking place.


Activity Log File

The activity log is stored in the file buffers.log with a data level per weekday (buffers.log,Monday, buffers.log,Tuesday, etc.). The file is created automatically when the buffers (H) command is used for the first time. Each data level is cleared when the day changes, so that the file automatically records a whole week of activity. The item-id is the internal time, on five digits.

The buffers command also creates automatically the dictionary attributes corresponding to the various counters, as shown in the table above. The attribute TIME displays the sampling time.

The attribute DESCRIPTION in the D pointers Monday, Tuesday etc... contains the date.

The file is created with a DX attribute.
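The five-digit item-ids follow from Pick internal time being the number of seconds past midnight. A sketch of the mapping (the function names are illustrative):

```python
def internal_time_id(hh, mm, ss):
    # Pick internal time: seconds past midnight, padded to five digits,
    # which is how the log items are keyed.
    return "%05d" % (hh * 3600 + mm * 60 + ss)

def external_time(item_id):
    # Convert an item-id back to HH:MM:SS for display,
    # as the TIME dictionary attribute does.
    t = int(item_id)
    return "%02d:%02d:%02d" % (t // 3600, t // 60 % 60, t % 60)
```

For example, a sample taken at 11:14:00 is stored under item-id 40440.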

Monitoring Activity

Logon to the dm account. Type:

buffers {(options}

options


C Clear today's log data level, when used with the (H) option. This option must be used the very first time. To restart monitoring after having stopped it for a while, do not use the (C) option.

H{n} Record statistics in the log file. If followed by a number n, the process sleeps n seconds between samples. The default value is 5 seconds. When sampling over long periods, 5 minutes (300 seconds) is a good compromise between accuracy and volume of data.

L{n} Loop sampling and displaying statistics. If followed by a number n, the process sleeps n seconds between each sample. The default value is 5 seconds.

S Display system counters. Without this option, a simplified set of counters is displayed. All counters are always recorded, even without this option.


Examples:

buffers

Take one sample of the non-system statistics.

buffers (sh300c

Loop displaying all counters, recording history and sampling every 300 seconds (5 min). The log file data level corresponding to today is cleared, thus starting a new session.

When looping, buffers polls the keyboard to detect the key "x" to stop or "r" to redraw the screen if it has been disturbed by a message, for instance. Any other key forces buffers to take another sample.

Displaying Log File

Raw display

The history file can be displayed by any access sentence. For example:

sort buffers.log,friday with time >= "11:14:00"

Histograms

The buffers.g command lists the log file as a series of histograms. The syntax is:

buffers.g cntr [day{-{day}}|*] {step {strt.time-{end.time}}} {(option}

cntr Statistic counter name (e.g., fflt for the third counter). Must be among the list shown in the table above. If the counter specified is relative to the buffers, percentages of the total buffers are displayed, rather than raw figures.

day Day{s} to list. The day can be one day, expressed either explicitly (monday, tuesday, etc.) or as a number from 1 (Sunday) to 7 (Saturday). A range of days can be specified as two days separated by a dash (-). If the second day is omitted, Saturday is assumed. The whole week can be listed by using an asterisk (*).

step Specifies the display time step as HH:MM{:SS}. All samples taken within the step are accumulated and averaged. If step is not specified or if the step is 0, or if the step is smaller than the sampling period in the log file, all samples are displayed.

strt.time Starting time. If no starting time is specified, 00:00:00 is assumed.

end.time Ending time. If no ending time is specified, 23:59:59 is assumed.


Options


P Direct output to printer.

Examples:

buffers.g fflt * 01:00:00

List the number of frame faults (disk reads) for the whole week, by steps of one hour. In the example below, no history was recorded before Wednesday.

No log for Sunday

No log for Monday

No log for Tuesday

20Feb1991; Wednesday; Ctr=fflt, Step=01:00:00, Range=00:00:00-23:59:59

0 8848 17696 26544 35392 44240 53088 61936
+------+------+------+------+------+------+------+------+----
10:59:28 *************************
11:59:54 ***********************************************************
13:00:25 **********************************************************
14:00:52 ************************************
15:01:18 ***************************
16:01:49 ********************************************************
17:02:22 ***************************************
18:02:55 ******
19:03:32 ***********************************************
20:04:08 *************************************************
21:04:43
22:05:21 ***************************************************
23:05:55 *************

Number of samples : 155
Total : 622070
Average per period : 7.1999 / sec.
Max value : 88481
Peak time : 13:00:25

buffers.g ww monday-friday 00:30 08:00-17:30 (p

List the percentage of write-required buffers, for the weekdays only, during business hours, by steps of 30 minutes.
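The accumulation and averaging that buffers.g performs within each step can be sketched as follows (a simplified model; the scaling of the asterisk bars to the widest row is an assumption based on the example output above):

```python
def bucket(samples, step):
    # samples: list of (time_in_seconds, value); step: bucket width in seconds.
    # All samples falling within a step are accumulated and averaged,
    # as buffers.g does before drawing each histogram row.
    sums, counts = {}, {}
    for t, v in samples:
        b = t - t % step                    # start of the bucket this sample is in
        sums[b] = sums.get(b, 0) + v
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

def bars(avgs, width=59):
    # Render each bucket average as a row of asterisks scaled to the peak.
    top = max(avgs.values()) or 1
    return {b: "*" * round(v * width / top) for b, v in avgs.items()}
```

With a one-hour step, the three samples (00:00:00, 00:01:40, 01:01:40) below fall into two buckets, and the peak bucket gets a full-width bar.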


Interpreting Results

After taking a significant sample, list the results with the buffers.g command. The most useful parameters to watch are:


Fflt This measures the number of frame faults. If this number approaches the disk bandwidth as specified by the manufacturer, the system becomes disk-bound. Solutions range from increasing the memory allocated to Pick, to changing disks, or reorganizing the Pick database on separate disks to increase parallelism.

Writes This number should stay at about one third to one half of the number of frame faults. It is not 'normal' for a system to do more writes than reads under normal operation. If this is not the case, see the section 'Flusher Adjustment' in this article.

Bfail This number should never be non-zero. If it is, the memory allocated to Pick is definitely too small.

WqFull This number should not be non-zero 'too often'. If it is, and if the number of writes is also too big, there is an abnormal rate of writes. See the section 'Flusher Adjustment' in this article.

Bcolls If this number becomes too high, it indicates that a lot of batch jobs (like selects of big files) are being done while other processes are doing data entry. It is also an indicator that interactive jobs are indeed receiving higher priority than batch processes. See the section 'Interactive - Batch Processes' below.

ww This number should never go above 50% of the whole buffer pool. If it does, the flusher is probably not activated often enough. See the section 'Flusher Adjustment' below.

avail This number should never go below 10% of the whole buffer pool. If it does, memory must be increased or the flusher must be adjusted.


Flusher Adjustment

The flusher is a background process, started automatically at boot time, which scans the Pick memory and writes back to disk frames which have been modified. It is an important task, not only to ensure that data gets back on disk, but also to make room for new data. Usually, a process reads data, modifies it, but may not need it for a 'long' time. The flusher takes care of writing the data back on disk so that the memory can be reused to read in other data.

This 'cleaning' of the memory is done:


- Periodically, when the disk is not active. If the disk becomes inactive 'for some time', the flusher wakes up and scans the memory, writing back all it can unless another process requires a disk access. This period is defined by the flush statement in the configuration file.

- On demand. When the memory gets 'full', i.e., when a lot of pages in memory have to be written back to disk, the flusher wakes up immediately.


The more often the flusher is awakened, the more often memory is written back to disk. But this creates disk activity, decreasing the disk channel bandwidth available for 'useful' work, and CPU activity, adding system load. Another catch to a high-frequency flush is that data which is being modified (workspace, select lists, etc.) may be written to disk several times when only the last write was necessary.

The verb set-flush allows changing the flush period (see the section 'TCL Commands' in this document). Increase this period, checking with buffers that the 'write queue full' events remain low and that the number of available buffers does not drop too low. Normally, the system is self-regulating, increasing the flush frequency in case of high memory usage, so there is no need for a low flush period. 30 seconds should be a high limit.

The configuration file also contains the statement dwqnum, which defines the length of the internal write queue. Increasing this queue reduces the probability of the flusher being awakened on critical demand, thus reducing the number of flushes. The downside to increasing the write queue size is that the flusher works in 'bursts', which may overload the disk channel when this happens. This parameter cannot be changed dynamically, which makes it a bit more difficult to tune.
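The effect of dwqnum on demand flushes can be illustrated with a toy model (the queue mechanics below are a deliberate simplification of the real flusher, which also flushes periodically):

```python
def simulate(dirty_events, dwqnum):
    # Toy model: each event dirties one frame and enqueues it for write.
    # When a writer finds the queue full (length dwqnum), the WqFull
    # counter ticks and the flusher wakes on demand, emptying the queue.
    queue, demand_flushes, wq_full = 0, 0, 0
    for _ in range(dirty_events):
        if queue >= dwqnum:
            wq_full += 1         # WqFull: the write queue was full
            demand_flushes += 1  # flusher awakened on critical demand
            queue = 0            # demand flush empties the queue
        queue += 1
    return demand_flushes, wq_full
```

With the same update load, a larger dwqnum yields fewer (but larger) demand flushes, which is exactly the burst behavior described above.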

Interactive - Batch Processes

Pick user processes are divided into two classes, depending on their type of activity: an interactive process typically does keyboard input 'frequently'; a batch process has little keyboard activity, requires a lot of disk I/O, and/or is CPU-intensive. The system automatically discerns which type of process is running, based on internal statistics.

The System Administrator can bias and/or override the default parameters used by the prioritization mechanism. Though not recommended, one can even force any process, regardless of its activity, to be seen by the system as "interactive", for example. This can be changed dynamically on a per-process basis via the set-batch command. Also, the TCL command set-batchdly allows displaying and setting the global values used in the queueing of certain types of process activity.
Syntax
Options
Example
Purpose
Related tcl.what
tcl.set-batchdly
tcl.set-batch
tcl.syschk
tcl.buffers
unix.performance

tcl.ap.unix

Command tcl.ap.unix Verb: TCL2/Unix
Applicable release versions: AP/Unix, AP 6.0
Category Unix (24)
Description starts the Pick virtual machine or a Pick user process.

-0
Starts the virtual machine. This process has the responsibility of initializing the virtual machine. All other processes wait for the coldstart to complete its initialization before actually starting. Once the virtual machine is started, line 0 can be disconnected, by typing 'exit' or 'disc', on any other line. Issuing 'ap -0' again simply reconnects to the virtual machine if it is booted.

-a bootarg
Starts a virtual machine automatically. This process has the responsibility to initialize the virtual machine. All other processes will wait for the coldstart to complete its initialization before actually starting. This option is similar to the "0" option, with the difference that the system does not prompt for a boot option. Instead, it takes one-character commands from "bootarg" among the boot options: "x", "f" or "a". When encountering the "x" command, the system polls the keyboard for a period of "bootsleep" seconds (the default is 3 seconds) for a user intervention. If any key is pressed during this short period, the system defaults to a manual boot. "bootsleep" can be redefined in the Pick configuration file, by adding the statement 'bootsleep n' or by using the TCL command "config options". "bootarg" is a string of commands as if typed by the operator for a manual boot. When the command involves a tape, the tape is assumed to be ready. Therefore, the "c" for continue should not be included in the string.

- port.number
Starts a user process. Expressed in decimal from 1 to the maximum allowed number of users, this starts a Pick process on a given port when it is necessary to control the port on which the process is running. If the port is not given, the system allocates the first available port.

-q
Query. This command can be executed by any process to get information about the specified virtual machine. If the virtual machine is started, then the command returns exit code "0" to the shell, otherwise a value of "1" is returned. This allows testing the existence of the virtual machine from the shell.

-k
This command kills all processes attached to the specified virtual machine. First, a Pick logoff is attempted, followed by a terminate signal, which should send the process back to Unix. If the terminate signal has no effect, a kill is attempted which removes the process. This command should be used only in extreme situations. This is not a normal way to stop a virtual machine.

-n configfile
This specifies the name of the configuration file. If not given, the default file name is "pick0" in the current directory or alternately in the "/usr/lib/pick" directory.

-t tty
This option specifies which port is to become the terminal for the process. The device is assumed to be in the "/dev" special files directory. If not specified, the terminal is "stdout/stdin", unless it has been redirected (the usual case on AIX systems). This option is normally intended to be used only in the "/etc/inittab" file. When this field is present, the system assumes the user process is started automatically, and behaves slightly differently when starting: it waits for line 0 to start, if it is not started already.

-y sttyarg
This option allows changing the default port setting for the process. "sttyarg" is any "stty" argument. If more than one element is changed, they must be separated by spaces and the whole argument enclosed between double quotes (see examples below). When the process terminates, the port characteristics are not reset to their original values.

-d dataarg
This option allows stacking data for the process once it is activated. "dataarg" is any string containing displayable characters and commands prefixed by a backslash (\). Note that the string is subject to normal shell parsing; thus, backslashes must be "escaped", or the entire string must be enclosed in single quotes. "dataarg" can be taken from a Unix file by using the shell command substitution mechanism (e.g., the "back quoting" mechanism: -d "`cat /usr/lib/pick/logon`"). All line feeds in the input string are converted to carriage returns.

The 'dataarg' commands are:

\r Insert a carriage return.

\f Turn echo off. The stacked data will not be displayed.

\n Turn echo on. The stacked data will be displayed.

\m Wait until the Pick virtual machine enters multiuser mode. The "maxusers" TCL command should be included in the "user-coldstart" macro after all system and application initialization is completed.

\\ Insert a backslash.

When activating a process for the first time, the system reads one character, then empties the type-ahead buffer. Therefore, stacked data should always start with the sequence \r, followed by the real logon sequence.

-l
Retains the login Unix user-id. This option logs the process on as the same Unix user as the one used to log in to Unix, overriding the user definition contained in the configuration file. A Pick process started with this option does not have access to the Pick spooler, and has, at best, very restricted access to the message facility. The user is in some ways isolated from the other Pick users. This option should be used only by users who want their own Unix environment underneath the Pick process, or to do system configuration which requires 'root' access.

-D
Enables Monitor Debugger. On entry, the process enters the Monitor Debugger. This option is ignored if the process started is a phantom or a printer. Type g<return> to actually start the process.

-s
Enables "silent" mode. If used along with the "-0" command, the Pick machine will boot and return directly to Unix. If used on a normal line, output of logon and logoff messages and user macros is suppressed and any attempt to log off will return directly to Unix. Because output is suppressed, the "-s" flag must generally be used along with the "-d" flag followed by a string containing the user name, user password, MD, and MD password when applicable, so that the user is logged into Pick automatically. The "-s" flag is used by the "tcl" Unix shell script.

-i nice
Set the relative priority of the Pick process, compared to other processes, Pick or not, running on the system. Legal values of 'nice' are -20 to +19. -10 gives the highest priority, +19 the lowest.

-W
Wait for the device specified by the '-t' option to be created. Available only on AP 6.1.7 or later. This option instructs the Pick monitor that the device does not exist yet, thus avoiding the 'no such file' error. This is used in configurations where the '/dev/' entry is created dynamically by a Unix daemon (for example on HP-UX using a DTC). The process polls the specified device and waits until it is created as a pipe or block or character device. This option is ignored if the '-t' option is not specified as well.

-printer
This option specifies that the port is to be used as a printer. It suppresses the messages 'Connected to virtual machine' and 'Disconnected from virtual machine', but does not suppress the normal Pick process messages, like the Pick logon message.

-pprinter
Parallel Printer. This option specifies that the port is to be used as a printer on a parallel device. This option must be used on Unix implementations where the device is a write-only device, such as in the case of AIX. If the parallel printer is a Read/Write device, this option is equivalent to "-printer".

-spooler
-scheduler
-phantom
Any of these three options specifies that the port is to be used as a phantom, or as a process not attached to a physical port.

-u /ttelnet.port,s
Starts AP telnet on telnet port telnet.port. This option starts the AP telnet server on the telnet port number specified by telnet.port and waits for connection from a client. The client makes the connection by using telnet on the server host and the same telnet.port.
Syntax ap {{-[0|a bootarg|port.number|f|q|k] }{-n configfile} {-t tty} {-y sttyarg} {-d dataarg} {-l} {-i nice} {-[printer|pprinter]}} {-D} {-s} {-W} {-u /ttelnet.port,s}
Options
Example
ap -0
Starts the virtual machine 'pick0' (default name).

ap -n mymachine -0
Starts the virtual machine 'mymachine'. 

ap
Starts a user process on the virtual machine 'pick0' (the default name), on the first available port.

ap -5
Starts a user process on the virtual machine 'pick0' (the default name), on port 5.

pick -n mymachine -7

Starts a User process on the virtual machine 'mymachine', on port 7. 

ap -q
Displays information about the virtual machine 'pick0' (default 
name). 

ap -0 -t tty2
Starts the virtual machine 0 on /dev/tty2. This statement is normally included 
in the /etc/inittab file. 

ap -3 -t tty6 -y "9600 parenb -parodd" -printer

Starts a user process on /dev/tty6 as a printer. It changes the baud rate to 
9600 baud and enables even parity ("parenb -parodd"). This statement is 
normally included in the /etc/inittab file. 

ap -a a3x
Automatically boots the virtual machine, then does an ABS restore from device 
3, then issues an "x" option. 

ap -d '\r\mdm\racct\rterm ibm3151\rmenu\r'
Starts a user process on the first available port, stacking commands to wait 
until multiuser mode is entered, and then log on as "dm" to the 
account acct, executing the "term ibm3151" and "menu" 
commands. Note the leading '\r' to make sure the process is logged 
on properly the very first time after a boot.

Shell script 'boot.ap':
001 # Test if VM is active, else boot it
002 ap -q > /dev/null
003 if [ $? -ne 0 ]
004 then
005    ap -a x
006 fi

This shell script uses the -q option to test whether the virtual machine is 
booted. If the 'ap -q' command returns a zero exit code (ok), the 
virtual machine is already booted and nothing is done. Otherwise, the virtual 
machine is booted automatically.

$su 
password:(enter 'root' password)
$ap -l

Enters the Pick virtual machine, retaining the current Unix user id (root, 
because of 'su').

$ap -7 -u /t2007,s &

Starts a background AP telnet server process which connects to the default AP 
virtual machine "pick0" on PIB 7 and waits for a connection from a 
client. If the server host name is serverhost, a Unix client makes the 
connection by using the "telnet serverhost 2007" command in the Unix 
shell.
Purpose
Related port.number
unix.pick0
tcl.pick
tcl.kill
pid
tcl.psr
virtual.machine
tcl.maxusers

tcl.cvtcpy

Command tcl.cvtcpy Verb: Access/TCL
Applicable release versions: AP 6.1, AP/Unix
Category TCL (746)
Description is a utility to copy a tape to another tape, doing format and block conversions.
This utility reads Pick or Ultimate tape formats, with a block size up to 64K, and creates an AP tape.
Arguments are expressed in any order, as "keyword=value" and are all optional, except 'if':

if Input device Unix name. This field is required.

of Output device Unix name. If omitted, the data is dumped on the terminal or printer, in decimal, or hexadecimal if the (X) option is used. The output device or file must exist.

ib Physical input block size. This size is not necessarily the attached size. Some devices have header information in each block. If not specified, the input block size is obtained from the label, if present. The block size is expressed in decimal, in bytes or in kilobytes if followed by a 'k'.

tib Physical tape input block size. This size is the block size on the input tape. If not specified, 'tib' is set equal to 'ib'. Some devices have a physical block size which is dictated by the controller. For example, on HP-UX, reading a 16K block requires reading 32 512-byte blocks. In this example, we would have ib=16k and tib=512. The block size is expressed in decimal, in bytes, or in kilobytes if followed by a 'k'.
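The relationship between 'ib' and 'tib' can be sketched with a short calculation. This helper is illustrative only (the function names are invented, not part of cvtcpy); it assumes the 'k' suffix convention described above:

```python
def parse_size(s):
    """Parse a cvtcpy-style block size: decimal bytes, or kilobytes with a 'k' suffix."""
    s = s.strip().lower()
    return int(s[:-1]) * 1024 if s.endswith("k") else int(s)

def physical_reads(ib, tib):
    """Physical blocks the controller must read to deliver one logical input block."""
    ib_bytes, tib_bytes = parse_size(ib), parse_size(tib)
    return (ib_bytes + tib_bytes - 1) // tib_bytes  # round up

# The HP-UX example from the text: ib=16k, tib=512
print(physical_reads("16k", "512"))  # 32 reads of 512 bytes each
```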

ob Physical output block size. This size is not necessarily the attached size. Some devices have header information in each block. The actual attached size is in fact ob-oo (see 'oo' below). If not specified, the output block size is set equal to the input block size after it has been determined. The block size is expressed in decimal, in bytes or in kilobytes if followed by a 'k'.

io Input offset. This offset specifies at which byte offset the data actually starts in the physical block determined by ib. If not specified, the offset is assumed to be null.

oo Output offset. This offset specifies at which byte offset the data actually starts in the physical block determined by ob. If not specified, the offset is set equal to io.

il Input label block size. This value specifies the size of the block in which the label is found. If not specified, the label block size is assumed to be equal to the input block size. On devices which support variable block length (1/2"), this argument can be considered as the maximum block size of a label block. On devices where the device writes data in fixed blocks (SCT), it is important to specify the label block size exactly.

ol Output label block size. This value specifies the size of the block in which the label is written. If not specified, the label block size is assumed to be equal to the output block size. The logical label is written at the beginning of the label block, at the offset 'oo', if specified.

reel Output reel number. If not specified, the reel number is extracted from the input label.

fwd Number of files to skip before reading.

skip Number of blocks to skip before reading. The label is not skipped, unless a (U) option is used. If both 'fwd' and 'skip' are specified, the 'fwd' operation is done first, and then the blocks are skipped.

seek Number of blocks to skip on the output device before writing. This option is supported only if the device is a direct access device (floppy or file).

count Maximum number of input blocks to transfer. If not specified, the data is copied until an error occurs or two consecutive file marks are encountered. When the maximum number of blocks has been reached, two file marks are written on the output device, thus truncating the data.

files Maximum number of input files to transfer. If not specified, the data is copied until an error occurs or two consecutive file marks are encountered. When the maximum number of files has been reached, a second file mark is written on the output device to terminate the tape properly.

itype Input device type. This option allows setting usual defaults for the device block size, label size and offset. These values can be explicitly specified to override the default. The valid types are: 'floppy', 'sct', '8mm', 'dat' or '4mm', 'half' or '1/2'. If the device is a pseudo floppy (Unix file), the device type is determined automatically and can be omitted.

otype Output device type. This option allows setting usual defaults for the device block size, label size and offset. These values can be explicitly specified to override the default. The valid types are identical to 'itype'.

el Embedded label length. This option allows reading labels that are embedded in a block of data instead of being located in their own block.

When possible, the label logical format is determined automatically.
Syntax cvtcpy {keyword=value} {(options}
Options F Do not write any filemark.

I Ignore block size on tape label, use values set by tcl parameters.

N Do NOT discard the first block on the second source reel. By default, cvtcpy assumes that the last block on a reel was duplicated on the following reel, and discards it when reading. This option keeps the first block. If a CIE end-of-reel label (_R) is recognized, this option is set automatically.

P Output to printer.

Q Quiet. Suppress all messages, except the final message and the error messages.

S Single reel. Disable the mechanism by which 'cvtcpy' attempts to cross reels on the source tape. Without this option, three consecutive file marks are interpreted as an end of tape.

U Unlabelled tape. No label is expected and none is written. With this option, the block sizes must be explicitly specified.

V Verbose. Displays more information about the process.

X Dump data in hexadecimal when there is no output device (dump to terminal or printer).
Example
cvtcpy if=/dev/rmt0.1 itype=half of=/dev/rmt1.1 otype=8mm

  Copy a half inch tape to an 8mm tape. All defaults apply (output block size 
16k, no offset, label block size=512).


cvtcpy if=/tmp/floppy (x

  Dump a pseudo tape (Unix file) on the terminal, in hexadecimal.


cvtcpy if=/dev/rmt1.1 itype=8mm of=/dev/rmt0.1 ol=512 ob=12000 oo=0

  Copy an 8mm tape to /dev/rmt0.1 (whatever it is), specifying a label block 
size of 512 (the actual label data is still 80 bytes), a physical data block 
size of 12000, and no offset (which means the attached block size is 
12000-0=12000).


cvtcpy if=/dev/rmt0.1 itype=half of=/dev/rmt1.1 otype=8mm fwd=1 files=3

  Copy 3 files, skipping the first file, from a half inch tape to an 8mm tape.
Purpose
Related

basic.print.on

Command basic.print.on Statement/BASIC Program
Applicable release versions: AP, R83
Category BASIC Program (486)
Description directs output to one of a number of open print files.

On AP, 100 print files may be opened at one time. The range of the "print.file.number", however, is between 0 and 32767.

The "print.file.number" has no connection to Spooler print file numbers. This number is logical and local to the current program and is used to group output.

Each logical "print.file.number" is assigned the next available spool job number, so it is possible to have the statement "print on 1 answer" output to job #3 and "print on 2 string" output to job #30.
Syntax print on print.file.number print.expression
Options
Example
print on 0 oconv(chk.dt,"d2/") "l#9":amt "r2*20"
Purpose
Related basic.statements
basic.crt
basic.printer
runoff.pfile
tcl.sp-assign
basic.print
basic.,

filename.devs

Command filename.devs Definition/Access: General
Applicable release versions: AP 6.1
Category Access: General (65)
Description defines the various devices in the system.
This file, located on the 'dm' account, has two data levels:

init This data level contains static information, which can be changed by the System Administrator, to set up the various entities in the system. For example, the initial baud rate of the serial ports, or whether 'tandem' is allowed on the Pick ports, can be specified in this data level. It contains an item for every possible element in the system, even those not physically present.

devs This data level contains dynamic information. The TCL command ':reset-async', normally run at boot time in the 'system-coldstart' macro, clears the file 'devs,devs' and copies the information from 'devs,init' into 'devs,devs'. Only the physically present system elements are represented in this file. Writing an item into this data level actually programs the device. Items in this file may be added or removed by the "dev-make" or "dev-remov" TCL commands (AP/Unix only).


The item-id's in this file are the entity ids, which are composed of a one letter code ('p' for Pick ports, 's' for serial device, 'e' for Ethernet, etc...), followed by the entity number in decimal (the Pick port number, the serial device number, the TCP/IP connection id, etc...). See the section "entity" in this document for more information.
Both data levels can be examined using Access or the Update processor.
The structure of an item in these files is described in the include 'dm,bp,includes qcb.inc'.
Syntax
Options
Example
u devs,init s12
  Examine the characteristics of serial device 12. This brings up the Update 
processor, and allows changing the serial device initial setting. Filing this 
item does not actually change the setting. It will be changed the next time the 
command ':reset-async' is run (normally at boot time) or by running 
the command ':reset-async s12'.

u devs p0
Examine the characteristics of Pick port 0. The only element that can be 
changed is the 'tandem' flag, which controls whether tandem is allowed 
or not on port 0, and whether the target process should be notified about 
tandem on/off. See "tandem" in this document.

sort devs = "s]"
  Show the current status of all serial ports.
Purpose
Related tcl.tandem
tcl.converse
tcl.mirror
general.entity
tcl.:reset-async
tcl.system-coldstart
tcl.dev-make
tcl.dev-remov
tcl.:ent-list

tcl.block-print

Command tcl.block-print Verb: Access/TCL
Applicable release versions: AP, R83
Category TCL (746)
Description produces a "banner" by converting characters to a large block format, made up of rows and columns of the character itself.

If the text contains too many characters, the text string is wrapped at a word boundary, if possible; otherwise, the text is wrapped after nine characters.

Text enclosed within quotes attempts to print on the same line without breaking on the space(s) between the words.

The characters in the text string are defined in the "dm,block-convert," file.

A character definition must consist of exactly nine attributes: (example of definition for 'H' character:)

id H

001 7 ;* # horizontal cells: 1234567
002 C2,3,2 ;* raster line 1: HH HH
003 C2,3,2 ;* raster line 2: HH HH
004 C2,3,2 ;* raster line 3: HH HH
005 C7 ;* raster line 4: HHHHHHH
006 C2,3,2 ;* raster line 5: HH HH
007 C2,3,2 ;* raster line 6: HH HH
008 C2,3,2 ;* raster line 7: HH HH
009 B7 ;* raster line 8:

where:
"raster" is either:
Cnn {,nn {,nn...}}, or
Bnn {,nn {,nn...}}

such that:
Cnn indicates the id 'character' to be printed nn times.
Bnn indicates a blank to be printed nn times.
, indicates a switch from 'character' to 'blank', or vice-versa.
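As an illustration of the run-length notation, a hypothetical decoder (not the actual block-print implementation) can expand a raster line as follows:

```python
def render_raster(ch, raster):
    """Expand one 'block-convert' raster line such as 'C2,3,2'.

    The leading letter selects the starting run type (C = character,
    B = blank); each comma switches between character and blank runs.
    """
    out, mode = [], raster[0].upper()
    for run in raster[1:].split(","):
        out.append((ch if mode == "C" else " ") * int(run))
        mode = "B" if mode == "C" else "C"
    return "".join(out)

# The raster lines for 'H' from the definition above
for line in ["C2,3,2", "C2,3,2", "C2,3,2", "C7",
             "C2,3,2", "C2,3,2", "C2,3,2", "B7"]:
    print(render_raster("H", line))
```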

New items may be edited into the 'block-convert' file to create new languages, or even typefaces (such as script or italics). However, the height must remain at 9 characters (attributes).

Each word or passage is centered on the output line according to the width of the device to which it is being output. The device width is determined by the most recently executed "term" command.
Syntax block-print text {(options}
Options n No pause (nopage); suppresses pause at end of page on terminal display.

p Directs output to system printer, via the Spooler.

u Upper case option. If the banner character is lower case, the block character is made up of the equivalent upper case character.
Example
block-print "Eat At Joe's" Bar & Grill (p 

Without the double quotes around the first part of the banner, this command 
would fail with an "uneven number of delimiters" message. Secondly, 
the quotes around "eat at joe's" passage force the passage to 
appear on the same output line - side by side. Without the quotes around a 
passage, each word appears by itself on the line.

block-print Hi
HH   HH   ii
HH   HH
HH   HH  iii
HHHHHHH   ii
HH   HH   ii
HH   HH   ii
HH   HH  iiii
Purpose
Related tcl.p.option
filename.block-convert
tcl.n.option
tcl.options
tcl.term
tcl.termp
tcl.term-type

basic.common

Command basic.common Statement/BASIC Program
Applicable release versions: AP, R83
Category BASIC Program (486)
Description declares data elements to share among different Pick/BASIC modules.

The "common" (or "com") statement must appear before any variable. It is used to allocate variables to a common location, so that more than one program may have specified variables in a predetermined sequence.

Common variables (including dimensioned arrays) are allocated in the order they are declared. In the absence of a "common" statement, variables are allocated in an undefined order.

Dimensioned arrays may be declared in a "common" statement by specifying the dimensions enclosed in parentheses. For example, "common a(10)" declares an array with 10 elements. Arrays that are declared in a common statement must be declared with a constant number of elements, and may not be redimensioned with a "dim" statement.

The "common" statement may be used to share variables between programs and other programs or subroutines. It may also be used in Pick/BASIC subroutines that are called from attribute-defining items. In this case, the values in the common variables are preserved between calls to the subroutine.

All standard variable types are allowed for common variables as well. The most frequent use of common is to store string or numeric values, but other types, such as file.variables or select variables are equally valid.

The order of variables in "common" statements is critical. The names of the variables are ignored and the order of appearance determines the association. The subroutine being called must have the same number of (or fewer) values in its "common" statement as the main program.

The "/id/" option is used to specify a unique common area called a "named common" area. The "id" parameter must be unique within the program module where it appears. During execution, all program modules that declare named common areas using the same id reference the same variable space regardless of the location of the declaration within the program.

Multiple unique named common areas may be declared within the same program module. Named common space is preserved during an entire logon session.

All declarations of a named common in multiple modules must occupy the same amount of space (i.e., have the same number of variables and arrays, each array having the same number of elements). Multiple levels of a process share a given named common space which may be initialized at any level.

Arguments listed in both the "call" and "subroutine" statements should not be duplicated in the argument lists. Arguments that are also defined as "common" variables in both calling programs and subroutine programs should not be used in argument lists, since the data is already accessible through the common allocations. Violation of these rules can result in unpredictable values being passed between the programs.
Syntax common {/id/} variable{,variable...} {,array(dimension1{,dimension2})...}
Options
Example
The main program:

common x,y,z(10)
...
call process.it
for i = 1 to 10
print z(i)
next i
...
end

The "process.it" subroutine:

subroutine process.it
common x,y,z(10)
....
for y = 1 to 10
call get.input
next y
...
return

The "get.input" subroutine:

subroutine get.input
common x,y,z(10)
....
input x
z(y)=x
....
return

The variables, "x,y,z(10)", are global within a given main program 
and all of its subroutines.  Passing variables in common tends to be more 
efficient than passing them as subroutine arguments.

Example of named common usage:

program get.data
common /mydata/ name,zip
input name
input zip
execute "display.data"
end

program display.data
common /mydata/ name,zip
print name
print zip
end

Both modules in this example are main programs.  All programs can share 
information declared in a named common block.  The information stored in 
/mydata/ is valid until the user logs off, making it available to any other 
applications declaring a named common block of the same id.
Purpose
Related basic.statements
basic.precision
basic.enter
basic.dim
tcl.run
basic.assigned
basic.clear
variables
file.control.block
global.common
basic.performance
basic.call
basic.chain
basic.subroutine
basic.file.variable
named.common
basic.com

tcl.item

Command tcl.item Verb: Access/TCL
Applicable release versions: AP, R83, AP 6.0
Category TCL (746)
Description outputs the base fid of the group to which the specified item-id "hashes", and a list of all item-id's that are currently "hashed" to the same group.

Also displayed are each item's (hexadecimal) byte count field, a count of the total number of items in the group, the total number of bytes, the total number of "full" frames, and the number of bytes used in the "last" frame of the group.

If the given item-id is not found, the group to which it would hash is displayed.

Case sensitivity is an issue in the hashing algorithm used by the Pick system. If a file has a "d-pointer" type of "ds", item-ids are case-sensitive. This means two things: 1) the item-ids "dog" and "DOG" are treated as two separate items, and 2) they would most likely hash to different groups (unless, of course, the file has a modulo of 1). If case sensitivity is "off", "dog" and "DOG" are the same thing, and there can only be one dog. See the "case", "case-on", and "case-off" verbs, and the topic of "file-defining items".
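The effect of case sensitivity on hashing can be illustrated with a toy model. The hash below is NOT Pick's actual algorithm, just a simple stand-in with the same general shape (the item-id treated as a number, reduced modulo the file's modulo):

```python
def group_of(item_id, modulo, case_sensitive=True):
    """Toy group hash: illustrative only, not Pick's real hashing algorithm."""
    if not case_sensitive:
        item_id = item_id.upper()  # 'dog' and 'DOG' collapse to one id
    n = 0
    for byte in item_id.encode("ascii"):
        n = (n * 256 + byte) % modulo
    return n

print(group_of("dog", 11), group_of("DOG", 11))                # usually different groups
print(group_of("dog", 11, False), group_of("DOG", 11, False))  # always the same group
print(group_of("dog", 1), group_of("DOG", 1))                  # modulo 1: everything in group 0
```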

As of AP release 6.0.0, a new feature has been added to the display of items in a group. In the columns between the address and the byte size, three possible letters may appear. They have the following meanings:

B Binary item. This normally occurs with Pick/BASIC object code found in the dictionary level of source files.

F File pointer. This occurs only with "d-pointers".

P Means that this is a "pointer" (indirect) item. This occurs when a file is designated as having a "dp" in its "d-pointer", or when the item size passes approximately eighty percent of the frame size on the system. (See the "what" command for determining frame size).
Syntax item file.reference itemlist* {(options}
Options n Activates nopage function on output to the terminal.

p Directs output to system printer, via the Spooler.

s Suppresses output of item-ids.
Example
item testfile stuff
stuff
item not found
320710.0000     01BC  pc.v
1 items 444 bytes 0/444 frames in group 320710

This example illustrates the effect of not finding the item, yet showing the 
group to which the item would hash if it were there.
Purpose
Related tcl.group
tcl.itemlist*
tcl.case-off
tcl.case
file.defining.items
modulo.def
access.istat
access.hash-test
tcl.create-file
tcl.what
pointer.item

referential.integrity.b-tree

Command referential.integrity.b-tree Article/Article
Applicable release versions: AP
Category Article (24)
Description discusses the bridge processing code.

Contributed by Chris Alvarez. Original article ran in PickWorld Magazine.

One common problem in the management of a database is the ability to provide referential integrity. The postrelational database model, upon which Pick is based, allows the developer to develop relationships between files, thus eliminating the need for duplicate data.

This relationship in the Pick world is known as the translation processing code. This is what provides the automatic join capability in Pick. Using this processing code, the order file needs only to hold the customer item-id in order to build dictionaries that translate and retrieve any piece of information on that customer's item.

While this relationship is one of the major reasons for choosing Pick, it can also lead to problems in the area of referential integrity, as when a customer item that is used in 200 orders needs to be changed to a different item-id because a duplicate item exists for the same customer. Hence the bridge processing code was added to Advanced Pick to ensure referential integrity.
Referential integrity has been defined in many ways, but basically it is the ability for the system to maintain these parent/child relationships automatically. The bridge processing code makes it possible for the system to automatically handle the above situation.

When the customer's item-id is changed to a new item-id, every order with a reference to the old number is automatically changed to the new number. For our first example, create a CUSTOMER and an ORDER file using the following commands at TCL:

CREATE-FILE CUSTOMER 1 1

CREATE-FILE ORDER 1 1

The next step is to add the bridge processing code in the d-pointer of the file's data section. First, use the following command to add the processing code into the CUSTOMER file:

UD CUSTOMER

This command is a macro that uses the Update processor to edit the data section d-pointer. The following is an example of the screen:

DICT customer 'customer' size = 55

dictionary-code D
base 611207
modulo 1
structure
retrieval-lock
update-lock
output-conversion
correlative
attribute-type L
column-width 10
input-conversion
macro
output-macro
description
reallocation
hotkey.all
hotkey1
.
.
.
hotkey0

Use the return key and move the cursor down to the correlative attribute and add the following bridge processing code:

border;10;1

This processing code tells the system that each time an item is filed into the CUSTOMER file, use the order item-id in attribute 10 to add the customer item-id to attribute 1 of the ORDER file. This is an example of a single threaded bridge. The bridge may be made double threaded by adding the following line to the d-pointer for the data section of the ORDER file:

bcustomer;1;10

This processing code instructs the system to do just the opposite. Each time an item is filed into the ORDER file, use the item-id in attribute one to verify that the order item-id is in attribute 10 of the CUSTOMER file.

Bridges, just like b-tree indexes, are updated regardless of the method used to update the file. To see the processing code work, add a few customers to the CUSTOMER file. Next, enter an order into the ORDER file, using one of the customer item-ids on attribute 1.

Each time an order is filed, the order number will appear on attribute 10 of the customer's item. Try copying one of the used customer items to a new item-id. All of the items in the ORDER file that referenced that customer item-id will be changed to the new one.

This feature is also available from Pick/BASIC with one simple statement. The "replace" statement replaces all occurrences of one item-id with a new one. This comes in handy when writing software that removes duplicate items. Whenever duplicates are found in the CUSTOMER file, one item can be deleted from the file and the replace command can be used to change all of the references in the ORDER file to the new customer.

The bridge processing code can be used one step further to actually make updates to the database. The Updating Bridge processing code can be used to make inventory adjustments, keep running totals on chart of accounts, etc. Addition and subtraction can be performed on an attribute in another file using a value from the current item. To continue our example, create an INVENTORY file and place the following processing code on the d-pointer for the data section of the ORDER file:

binventory;2;1;3;+

This processing code will update the quantity available held in attribute 1 of the INVENTORY file. Each time an order is filed, the system uses the value held in attribute 2 of the ORDER file as the item-id in the INVENTORY file, and the value held in attribute 3 of the ORDER file is added to attribute 1 of that INVENTORY item.
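The arithmetic performed by "binventory;2;1;3;+" can be modeled with a hedged Python sketch (not Pick/BASIC; item-ids and quantities are illustrative): attribute 2 of the order names the inventory item, attribute 3 holds the quantity, and '+' adds it to attribute 1 of the inventory item.

```python
# Model of the updating bridge: filing an order adjusts inventory.
inventory = {"WIDGET": {1: 100}}   # attribute 1 holds quantity available

def file_order(order, inventory):
    part = order[2]                # attribute 2: inventory item-id
    qty = order[3]                 # attribute 3: quantity on the order
    inventory[part][1] += qty      # the '+' operation on attribute 1

file_order({2: "WIDGET", 3: 5}, inventory)
```

A '-' in the final position would subtract instead, which is how an order would normally decrement stock on hand.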
Syntax
Options
Example
Purpose
Related

boot.error

Command boot.error Definition/Unix
Applicable release versions: AP/Unix
Category Unix (24)
Description describes the boot error codes, displayed in the message: Boot aborted Pick process terminated. Line 0. Code X (0xNNNNNNNN)

where

X is a decimal value
0xNNNNNNNN is the hexadecimal value of X


Name/Value Description/Note

>0 Signal number from 1 to 22. Normally, all Unix signals are caught by the monitor; an uncaught signal terminates the process. If this error happens, it is probably because a user-written C function restored the signal handler routine to SIG_DFL.

-n (1 < n < 1000) Instruction diagnostic error. Optionally, the monitor instruction diagnostics are executed at boot time by line 0. In case of error, the process is terminated with a negative error code whose absolute value is the instruction diagnostic number.

p_dskerr/-1000 Disk error. Unrecoverable. Check the configuration file. A common cause for this error is a disk size that is too small (third argument to the disk statement in the configuration file).

p_ioerr/-1001 Terminal I/O error. Using the following command, check read/write permissions to your terminal:
ls -l /dev/ttyXX

p_maxfid/-1010 MAXFID has changed. The monitor required confirmation, but the user chose to abandon. A file restore is necessary. This error happens when choosing the option 'X' on a disk which has no files or has damaged files. Check the configuration file to make sure the size of the disks or the number of disks has not been changed.

p_sysbase/-1011 SYSBASE has changed. The monitor required confirmation, but the user decided to abandon. This error occurs if the number of PIBS is changed beyond the number allowed by the license. A file restore will be necessary.

p_flushst/-1012 The flush process has terminated, upon receiving a 'logoff' signal.

p_halt/-1013 Monitor HALT. The halt code and the halt address are displayed. Note the code and address.

p_nopib/-1014 No PIB available on the virtual machine. The allowed maximum number of users on the system has been reached. No new user process can be started.

p_twoflsh/-1015 Flush process already active.

p_vabserr/-1016 ABS is invalid or not loaded. Do an ABS restore (option 'A') and retry boot.

p_defect/-1017 Defect table has not been initialized. Delete the disk and reload ABS and files.

p_defectr/-1018 Defect table cannot be read. The disk needs to be reformatted and both ABS and files reloaded.

p_bactkey/-1019 An error occurred while trying to activate the machine. Possible reasons are:
Installer does not agree with terms.
Invalid activation key entered.
Attempt to activate more pibs than license allows.

p_badsksz/-1020 The size of the disk in the configuration file is invalid. Check the size and change the configuration file.

p_ra/-2001 Global MCB address error. The monitor is incompatible with the Unix version. A different monitor is required for Unix System V Release 4.
Syntax
Options
Example
Purpose
Related maxfid
pib
abs
tcl.config.options
sysbase

general.unix.q.ptr

Command general.unix.q.ptr Definition/General
Applicable release versions: AP 6.2
Category General (155)
Description accessing a Unix file from the Pick file system.
Through the 6.2 OSFI, it is possible to access Unix files as if they were Pick items, using Access, Pick BASIC, FlashBASIC, etc... This section describes the format of the q-pointer, the file structure and the access rules.

Conventions :

Since the Pick file system structure is fundamentally different from the Unix file systems, a few conventions have to be made to map an object from one file system to the other:

- A Pick item is mapped onto a Unix file.

- By default, the Pick attribute marks are converted to newline characters (decimal 10). This conversion can be optionally disabled.

- Again by default, if a Unix file contains the usual Pick delimiters, they are converted into a sequence of two characters: DLE (decimal 16) followed by a displayable character:
SM DLE _
AM DLE ^
VM DLE ]
SVM DLE \
DLE DLE DLE
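The table above can be sketched as a pair of Python functions (a hedged model, not part of AP, assuming the conventional Pick delimiter code points SM=255, AM=254, VM=253, SVM=252):

```python
# Escape Pick delimiters into DLE sequences, and reverse the conversion,
# so data round-trips between a Unix file and a Pick item.
DLE = chr(16)
ESCAPES = {chr(255): "_",    # SM  -> DLE _
           chr(254): "^",    # AM  -> DLE ^
           chr(253): "]",    # VM  -> DLE ]
           chr(252): "\\",   # SVM -> DLE \
           DLE: DLE}         # DLE -> DLE DLE
UNESCAPES = {v: k for k, v in ESCAPES.items()}

def escape(data):
    return "".join(DLE + ESCAPES[c] if c in ESCAPES else c for c in data)

def unescape(data):
    out, i = [], 0
    while i < len(data):
        if data[i] == DLE and i + 1 < len(data):
            out.append(UNESCAPES[data[i + 1]])
            i += 2
        else:
            out.append(data[i])
            i += 1
    return "".join(out)
```

Because DLE escapes itself, any byte sequence survives the round trip unambiguously.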

- Unix text files generally are terminated by a new-line, while Pick text items do not have a trailing attribute mark (The Pick equivalent of a new-line). By default, the terminating new-line (which would be converted to an attribute mark) is stripped when the item is read into Pick, and re-appended when that item is again exported to Unix. This provides a comfortable interface for text items, but IT WILL ADD AN ADDITIONAL NEW-LINE WHEN WRITING BINARY ITEMS. Therefore, this default mechanism must be disabled with the "A" option when modifying binary text, and especially when saving a Unix directory.

- Optionally, a section of white-space preceding a block of alpha-numeric text which aligns to a tab-stop can be replaced by the appropriate number of tab characters. This process is reversed if the item is re-transferred to Unix.
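The optional tab conversion can be sketched as follows (a hedged Python model, not the actual driver; the tab-stop width corresponds to the 'n' in the 't{n}' option, e.g. 't4'):

```python
# Replace runs of spaces that reach a tab stop with tab characters.
# Python's str.expandtabs performs the reverse conversion.
def compress_tabs(line, stop=4):
    out, col, spaces = [], 0, 0
    for c in line:
        if c == " ":
            spaces += 1
            col += 1
            if col % stop == 0:          # run reaches a tab stop
                out.append("\t")
                spaces = 0
        else:
            out.append(" " * spaces + c)
            spaces = 0
            col += 1
    return "".join(out) + " " * spaces

line = "        x"                       # 8 spaces, then text
packed = compress_tabs(line, 4)          # becomes "\t\tx"
assert packed.expandtabs(4) == line      # the process reverses cleanly
```

This is why the conversion is only safe for text: in binary data, a space run that happens to align to a tab stop would be silently rewritten.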

- A Pick file is mapped onto one or more Unix directories. The main data level of the file is mapped to a directory which has the same name. The dictionary of the file is a sub-directory called ".DICT". Other data levels are mapped onto sub-directories of the ".DICT" directory, prefixed with a period. The dictionary is optional, and required only if data is actually stored in the dictionary. If the dictionary is missing, the file system will still open it, but an error will be returned (no item on a read, and file write-protected on a write) if the application actually tries to access items in it. For example, consider the Pick file 'bp', with three data items and a dictionary holding two items:

bp
|
+------+------+------+
| | | |
item1 item2 item3 .DICT
|
+-------+
| |
ditem1 ditem2

This seemingly complex structure aims at making the most common case (a flat file) as simple as possible, and at keeping the internal objects ('.DICT') invisible by default to a Pick TCL 'list' or Unix 'ls' command.

Q-Pointer Format :

The format of the Unix Q-pointer is:
file.name
001 Q
002
003 unix:directory{]options}

'directory' is the name of the Unix directory onto which the main data level of the file is mapped. This directory can be any valid directory name (local directory, mounted Unix removable medium, NFS directory). Special files (device, pipe, etc..) can also be specified with some restrictions.

'options' is an alphanumeric string which controls the behavior of the driver. Spaces can be inserted in the option string for readability. It follows the directory name, separated by a Value Mark:
t{n} Convert white space preceding text aligned to a tab-stop into a series of tabs, where n is the tab-stop width (e.g. 't4'). By default no conversion occurs. Note that this conversion option may modify the data (especially binary items) and is therefore suggested only for text files.

A Specifies that an extra attribute mark always be added when Unix files are moved into Pick, and that that attribute mark always be removed when that item is placed back in Unix. This option is absolutely necessary when saving and/or copying between different files or to backup media. Without this, non-textual items may have an extra new-line appended to them when added to the final Unix destination.

c Specifies the target is a special character file. This option imposes some restrictions (see the section Special Files below).

n Suppress the conversion of attribute marks to new-lines. By default, when writing a Pick item, attribute marks are converted to make the text easy to edit with Unix editors. Note that a trailing new-line is added at the end of the Unix file when it is written, unless this option is used.

s Case-insensitive item-ids and file names. With this option, the filenames and item-ids are converted to lower case, to make them case insensitive. See the section about case sensitivity below.

Item Locks :
Item locks are not supported.

Unix Q-Pointers to Special Files :
It is possible to specify a special character file (pipe, device) as the Unix directory, specifying the 'c' option in the q-pointer. However, there are restrictions:
- Special files cannot have a dictionary or other data levels.
- Only OPEN, READ, WRITE and CLOSE operations are permitted. DELETE is ignored. Sequential access (eg LIST) returns 'no item present'.
- When writing, there is no guarantee that the data is written as one block. This is especially important on pipes, for which the notion of atomic write is critical.
- When reading, the device must be able to report the size of the data using the Unix system call 'fstat()'. For example, a pipe may appear empty (size 0) at one point, and then contain data. The application must be prepared to handle empty items.

Case Sensitivity :
When the 'S' option is specified in the Unix q-pointer, the filenames and item-ids are converted to lower case. However, the driver does not detect files that may exist in the same directory with a different case. For example /bp/TEST and /bp/test are two different items. The user must be very careful when using Unix tools to access files otherwise used from Pick through a Unix q-pointer with the 'S' option.
The data in the file is never converted.
Syntax
Options
Example
Examples :
1. Create a Unix q-pointer to a Pick/BASIC program file located on Unix:
pgm
    001 Q
    002
    003 unix:/home/dev/bp
        t4
Use the default conversion along with tab expansion.

2. Create a Unix q-pointer to a Unix directory to be saved as part of the 
regular Pick file save:
bob
    001 QS
    002
    003 unix:/home/bob
        a
Use the "A" or append option to keep an additional attribute mark on 
the Pick data (which is stripped when written back to Unix).  This extra 
attribute mark ensures that ALL data can be saved and restored without 
corruption due to translation.
Purpose
Related general.super.q.ptr
general.remote
filename.hosts
s-pointer

tcl.stack.definition

Command tcl.stack.definition Definition/TCL
Applicable release versions: AP
Category TCL (746)
Description describes the TCL command stacker.

In AP, every unique command that is typed at the ":" (TCL) prompt is saved in a file on the "dm" account called "tcl-stack".

TCL stack entries may be recalled and edited using the Update processor edit commands. The UP commands listed below are valid when editing items in the TCL-stack file.

Changing any part of a TCL command in the stack causes that stack entry to be moved to the top of the stack. This feature tends to keep the stack compact, as does the fact that only "unique" commands are saved. "Unique", in this context, means that there are never any duplicated commands in the stack. For instance, even if the "who" command has been used many times, it will appear only ONCE in the stack. Each time a command is found and re-executed, it is moved to the "top" of the stack.
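The stack discipline described above (unique entries, most recently used on top) can be sketched with a hedged Python model; the class and command names are illustrative, not part of AP:

```python
# Model of the TCL command stacker: entries are unique, and a re-used
# command is moved to the top of the stack.
class TclStack:
    def __init__(self):
        self.stack = []                  # index 0 is the "top"

    def push(self, command):
        if command in self.stack:
            self.stack.remove(command)   # keep entries unique
        self.stack.insert(0, command)    # most recent first

s = TclStack()
for cmd in ["who", "listu", "who", "time"]:
    s.push(cmd)
# "who" appears once, and the ordering is most-recent-first.
```

This is why a heavily used command is always near the top of the stack, no matter how long ago it was first typed.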

There is no limit to the number of TCL-stack items, or to the number of attributes in each item; these items continue to grow indefinitely. Therefore, from time to time, the stack should be pruned, either from TCL or by using UP to modify the actual stack item (u dm,tcl-stack, user-id).

A user's TCL-stack is not terminal dependent. It is user-id dependent. If a user leaves a terminal unattended, another user can use the terminal under the previous user's id. This causes the new user to "step-on" the previous user's stack.

Since the TCL-stack is a frequently updated file, it is often locked by the system. If two users share the same user-id, then while the first user types a TCL command, the second user is "locked out" of the TCL stack, and their terminal will beep as long as the first user is still entering the command.

In AP release 5.2.5, the stacker was enhanced to allow new commands even if the stack item is locked by another port.

As of AP releases 5.2.5 and higher, it is now possible to cut and paste when editing the TCL command stack item.

In releases of AP 6.1.0 and higher, it is possible to stack data or additional commands under the TCL statement by using the <ctrl>-v command.
Syntax
Options
Example
Purpose
Related up.r
up.t
up.w
tcl.stack-off
up.y
up.z
filename.tcl-stack
tcl.stack
tcl.stack-on
line.continuation.character
tcl.introduction
tcl.edit.commands
up.g
up.m
up.n
up.o
up.e
up.i
up.k
up.l

UE.61A2

Command UE.61A2 User Exit/PROC
Applicable release versions: R83 3.1, R83 3.0, R83 2.2
Category PROC (92)
Description toggles a system-level flag which controls visible output to the user's terminal.

Suppressing terminal output in this way is equivalent to the PROC "ph" command.

The reverse of this operation would be "u3193", which sets terminal output ON regardless of its current setting.
Syntax
Options
Example
pq
u61a2
hsselect md
ston
hsave-list mx<
ston
p
u61a2
Purpose
Related proc.user.exits
ue.61bc
proc.ph
tcl.hush
tcl.p
filename.messages

attribute.defining.item.article

Command attribute.defining.item.article Article/Article
Applicable release versions: AP
Category Article (24)
Description Using processing codes in an attribute-defining item.

The Power of Advanced Pick Dictionaries, Part 2:
The Attribute-Defining Item

Contributed by Terri Hale
(original article ran in PickWorld Magazine)

Part 1 of this series described the features available to the Advanced Pick (AP) application programmer through extended processing codes invoked through the d-pointer or file-defining item.

This article will focus on the tools and features available to the application programmer and end user through the use of the AP data dictionary or Attribute-Defining Item.

A different approach to application development:

For the application programmer, writing applications in AP requires a different mind-set than in classic Pick or other databases. Since most of the application functionality can be performed using the dictionaries, the first step in developing an application is to define the file structures. This includes defining inter- and intra-file relationships.

Once the file structures are defined, the Update processor can be used to build the dictionaries and enter Pick/BASIC subroutines.

After the dictionaries are defined and Pick/BASIC subroutines are written (most subroutines are no more than one screen long), the Update processor can be used to enter and retrieve data.

A different approach to data retrieval:

For the end-user, data retrieval is immediate when using B-tree indices and the Update processor or Pick/BASIC. Once an index has been created and the index correlative has been defined, the user can access data for that attribute instantaneously.

Consider a name field with an index defined. To access a specific item, the user keys in a partial string and <ctrl>+f to 'cruise' forward in the data base for the person whose name begins with the given string. Another <ctrl>+f will bring up the next person whose name follows alphabetically.

This ability to immediately access data via indexed fields introduces a rather revolutionary concept. That is, item-ids are no longer necessary for data retrieval. You can find the item by 'cruising' through any indexed field - immediately. (There is no limit, other than disk capacity, to the number of indices that can be created.)

Along those same lines, many printed reports can be replaced by giving users the capability to look-up the information needed. This can be accomplished easily using AP's menu structure and Update processor (with the "look only" option) on indexed dictionaries.

Listed below are some of the application related features AP's processing codes provide:

* on-line edit checking
* item modification using Pick/BASIC program calls
* output formatting using Pick/BASIC program calls
* local and remote indexing capabilities
* limit the number of values for input
* require operator input
* define "view" or display only attributes
* define default attributes to be viewed or updated when "zooming" or jumping from the current file to another predefined file

The following table shows what the processing codes are and from where they can be called.

                             OUTPUT                   INPUT
PROCESSING CODE              CONVERSION  CORRELATIVE  CONVERSION
--------------------------------------------------------------
algebraic function x x x
ascending order x
attribute index correlative x
call BASIC subroutine x x x
character update x
concatenate x x
date conversion x x x
display only x
f-math correlative x x x
group extract x x x
length code x x x
mask character x x x
mask hexadecimal x x x
mask left & right justify x x x
mask time x x x
must input x
pattern match x x x
range x x x
remote index correlative x
substitution x x x
text extraction x x x
translate x x x
user exit x x
value code x
za x x
zip code x x x

In addition to the new processing codes, all of the existing R83 conversions and correlatives are available in Advanced Pick.

The rules for using processing codes:

A key to writing applications in AP is knowing where to put the processing codes in the Attribute-Defining Item. In general, the following rules apply:

* Processing codes on output-conversion manipulate data immediately before output.
* Processing codes on correlative are used to preprocess data.
* Processing codes on input-conversion are applied immediately after entry of data.

The Attribute-Defining Item has four new dictionary attributes in AP. They are input-conversion, macro, output-macro, and description.

Note: In AP releases on or after April 25, 1991, 11 additional dictionary attributes have been added to both the File- and Attribute-Defining Items. These "hotkeys" are used to call Pick/BASIC subroutines from the Update processor. More on these later.

A new look to Attribute-Defining Items:

Let's create an Attribute-Defining Item for the orders file called customer. With this attribute, we will translate to the customer file for the name on output, utilize a remote index to look up valid customers (and optionally enter or update customer items), require input on this attribute, and limit the number of values allowed on this attribute to one. We will do all this without a single line of Pick/BASIC code. To view/edit the Attribute-Defining Item, the Pick/BASIC program "ud" is used:

ud orders customer
DICT orders 'customer'
NAME DATA DESCRIPTION
dictionary-code a Valid codes are a or s.
attribute-count 1 Position of data.
substitute-header customer.name User/programmer defined.
structure dependent structure.
output-conversion tcustomer.file;x;;1 Translate name on output
correlative
attribute-type l Valid types are:
l,r,t,u,w,ww,lx,rx,tx
column-width 10 Width for output.
input-conversion i Local index on
orders(customer)
icustomer;a1 Remote index on
customer(name)
mi Defines must input field.
v1 Only one value allowed.
macro name address zip Default attributes
output-macro Not used.
description Enter customer name On-line UP help messages.

That was easy!

Now, we'll create another attribute in the orders file called part#. This attribute will control another attribute in a controlling/dependent structure. It will also translate the description of the part from the parts file, call a Pick/BASIC program to display the quantity on hand, require input and translate to the parts file for valid entry. If this is a new part#, the ability to "zoom" to the parts file and enter a new part number is provided.

ud orders part#
DICT orders 'part#'
NAME DATA DESCRIPTION
dictionary-code a Valid codes are a or s.
attribute-count 2 Position of data.
substitute-header part.number User/programmer defined.
structure c;3 Controls attribute 3.
output-conversion tparts;x;;1 Translate
correlative call display.qoh Program to filter info
attribute-type l Valid types are l,r,t,u,w,ww,x.
column-width 25 Width for output.
input-conversion iparts;a0 Remote index
mi Must input field.
macro part# qoh price Default attributes
output-macro Not used.
description Enter part number. On-line UP help messages.

A faster application prototype:

After the orders file and the two dictionary items (customer and part#) have been created, the Update processor can be used to enter data into the file. The UP command to generate an operable data entry screen would be 'u orders customer part#'. This screen would look like:

orders NEW ITEM
orders _________
customer _________
part# _________

When entering data into the orders file, the customer and part# attributes have all the abilities and restrictions that were designed into them.

The next example shows the 'hotkey' feature: the ability to call a Pick/BASIC subroutine from any attribute from within the Update processor. Remember that a subroutine called from output-conversion, correlative or input-conversion is executed automatically at its default time by a carriage return.

To call a Pick/BASIC subroutine while in the Update processor on a particular attribute, type control x<0-9>. Typing control x<1>, for example, will execute the subroutine called by hotkey1; typing control x<2> will execute the subroutine called by hotkey2, and so on. If no subroutine is called from that particular attribute, the system looks at the file pointer for any defined calls; if any are present, they are executed. Any subroutine calls on hotkey.all will also be executed by control x<0-9>.

ud orders state
DICT orders 'state'
NAME DATA DESCRIPTION
dictionary-code a
attribute-count 5
substitute-header
structure
output-conversion tstates;x;;0
correlative
attribute-type l
column-width 20
input-conversion istates;0
macro city state
output-macro
description
hotkey.all sub called by control x<0-9>
hotkey1 call list.states sub called by control x<1>
hotkey2 sub called by control x<2>
hotkey3 sub called by control x<3>
hotkey4 sub called by control x<4>
hotkey5 sub called by control x<5>
hotkey6 sub called by control x<6>
hotkey7 sub called by control x<7>
hotkey8 sub called by control x<8>
hotkey9 sub called by control x<9>
hotkey0 sub called by control x<0>

By modifying our UP command to add the state attribute, we have the following screen:

u orders customer part# state
orders NEW ITEM
orders __________
customer _________
part# __________
state __________

In the example above, when the user types control x<1> at the state attribute, the subroutine list.states will execute. Control will then return to the Update processor.

Pick/BASIC program 'list.states'

subroutine list.states(value)
execute 'sort states city state'
return

To summarize, AP provides many features for both the application programmer and the end user through the use of dictionaries. When these features are used in conjunction with the Update processor, the possibilities are limited only by your imagination.
Syntax
Options
Example
Purpose
Related

basic.%kill

Command basic.%kill C Function/BASIC Program
Applicable release versions: AP/Unix
Category BASIC Program (486)
Description sends the signal specified in "signal" to the process "pid".

All Pick processes normally catch signals for their internal use. The built-in "%pgetpid" allows finding the PID of a process by knowing its port.number (pib). Only "SIGUSR2" should be sent to a Pick process. Other signals are used internally and may cause problems if used out of context.

"SIGTERM" will logoff the Pick process and disconnect it.

"SIGHUP" will logoff the Pick process, but leave it connected to the Pick virtual machine. This behavior can be modified by providing a user writtem signal handler. See the 'trap' command.

"SIGINT" will emulate a <BREAK>, possibly sending the Pick process to the debugger.

Signal numbers are defined in "dm,bp,unix.h signal.h".
Syntax variable=%kill(pid, signal)
Options
Example
* Get its pid, and send hangup
* (SIGHUP=1) to it.
pib=32
pid=%pgetpid( pib )
if %kill( pid, 1 ) = -1 then
  print "Cannot logoff process ":pib
end
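The same pattern can be sketched on the Unix side with Python's os.kill (a hedged analogue, not the Pick/BASIC built-in; there is no %pgetpid equivalent here, so we probe our own pid, and signal 0 is used as a no-op existence check):

```python
# Analogue of %kill's -1 failure convention: attempt to signal a pid,
# returning 0 on success and -1 on failure.
import os

def try_signal(pid, sig):
    try:
        os.kill(pid, sig)
        return 0
    except OSError:          # e.g. no such process, or no permission
        return -1

# Signal 0 delivers nothing but reports whether the pid exists.
assert try_signal(os.getpid(), 0) == 0
```

As with %kill, a failure return is the cue to report that the target process cannot be signaled.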
Purpose
Related tcl.trap
tcl.pid
basic.cfunc
basic.%pgetpid
port.number
basic.cfunction
pid
tcl.kill

tcl.set-imap

Command tcl.set-imap Verb: Access/TCL
Applicable release versions: AP, AP 6.1
Category TCL (746)
Description defines a keyboard input and/or a terminal output translation table, through which any sequence of keyboard input and terminal output characters can be translated into any other sequence of characters.
Input translation can be used to translate special key sequences, like "ESC [ A", into a sequence understandable by the application.
Output translation can be used to convert a character into an appropriate escape sequence; for example, to change fonts on a printer, print the character, and change the fonts back.

The translation is based on the notion of 'input sequence', which is a variable-length series of characters which must be received completely within a given time, typically 1/10th of a second, to be recognized as one key stroke, and converted into an output sequence. If an input sequence is not received within the specified time, or if a character received is not part of a valid sequence, the sequence is aborted, and all characters received so far are 'de-sequentialized' and passed to the application or displayed as a series of discrete characters.
When applied to a terminal output translation, there is no notion of timeout, but there are some restrictions (see the section 'Warnings' below).

The translation is described in a table which is associated to each port. See the REF documentation 'keyboards' for the format of the table. A table can be shared among different ports. Each input and output table contains a 'main' translation table and an optional 'alternate' table. The two tables have identical structure and capabilities. The main table is active when the translation is activated. A special input sequence can be defined to switch to the alternate table until it is switched manually back, or for one translation (keystroke) only.

An input sequence can have from one to 127 characters. An output sequence can be from 0 to 127 characters long. If an output sequence is null, the corresponding key is made inoperative (input) or the data is not displayed (output).
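The recognition rule described above can be sketched with a hedged Python model (not the AP driver; the sequence table and translation names are illustrative, and timeout handling is omitted, though a timeout would flush the pending characters the same way an unexpected character does):

```python
# Model of input-sequence matching: characters accumulate while they form
# a prefix of a known sequence; a complete match emits the translation,
# and a non-matching character "de-sequentializes" the pending input.
SEQUENCES = {"\x1b[A": "<up>"}           # ESC [ A, as in the example above

def feed(pending, ch):
    pending += ch
    if pending in SEQUENCES:
        return "", [SEQUENCES[pending]]           # complete sequence
    if any(s.startswith(pending) for s in SEQUENCES):
        return pending, []                        # still a valid prefix
    return "", list(pending)                      # abort: pass through

out, pend = [], ""
for ch in "\x1b[Ax":
    pend, emitted = feed(pend, ch)
    out.extend(emitted)
# "ESC [ A" is recognized as one keystroke; the "x" passes through.
```

A real implementation would also flush `pend` when the inter-character timer expires, which is exactly what the "time.out" argument controls.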

Without a numeric option, the current port is affected. Optionally, a specific port can be specified by using the port number as a numeric option.

A <BREAK> key aborts any pending sequence and sets the main translation table as the active one.

If there is no argument, the translation mechanism, if currently active, is disabled.

"item.id" The item-id of the translation table, located in the 'keyboards' file in the 'dm' account. The format of the keyboard item is described in the REF documentation 'keyboards'. If the item has already been compiled into a translation table and stored as a binary item in the dictionary of the file, the translation item is not re-compiled, unless the "c" option is used.

"time.out" Value of the time out, expressed in milliseconds, after which an incomplete input sequence is aborted. If not specified, the value defined in the item "item.id" by the 'timeout value' modifier in attribute one, is used. If none is specified, the default value is 100 milliseconds. The "time.out" should be adjusted to the baud rate and possible special conditions, like network delays, to detect sequences of characters properly. If "time.out" is 0, the translation mechanism will wait indefinitely between characters of a sequence until either a valid sequence is received or until an unexpected character 'breaks' a sequence. The timeout does not apply to output translation. See the section "warning" below for information about adjusting the timeout value.

If the one-to-one input/output translation defined by the TCL command 'set-iomap' is also active on this port, the translation defined by 'set-imap' is processed first, and each character of the resulting string is passed through the one-to-one conversion table set by 'set-iomap'.

When used on PC-based AP/SCO or AP/Native, the keyboard setting defined by 'set-kbrd' is processed first.

set-imap can be called automatically by the TCL command TERM with the (K) option, if the item defining the terminal has the name of a keyboard translation item in the value 4 of attribute 1.
Syntax set-imap {item.id {time.out}} {(options}
Options port.number Port number in decimal. If not specified, the current port is used.

c Compiles the item. This option must be used when "item.id" is modified. "item.id" is compiled into a binary item and stored in the dictionary of the "keyboards" file.

v "verbose". Displays information about the translation table (name, size) and the modifiers used in attribute one.
Example
set-imap ibm3151
  Sets the keyboard translation to an ibm3151.

set-imap wy-50 100 (24
  Sets the keyboard translation to a wy-50, changing the timeout to 100 ms, on 
port 24.

set-imap att605 (c
  Sets the keyboard translation to an att605, recompiling the item first.

set-imap
  Disables the keyboard input translation.
Purpose
Related tcl.set-iomap
tcl.set-kbrd
filename.keyboards
tcl.term
filename.iomap-file
tcl.term-type
tcl.define-up

general.hot.backup

Command general.hot.backup Definition/General
Applicable release versions: AP 6.1, AP/Unix
Category General (155)
Description describes a 'hot backup' configuration, where one machine is in a standby mode, ready to take over the load from a failing system.


Introduction

It is often required to have a system configuration where down time due to a hardware or software failure cannot be tolerated, or must be reduced to a very short time. A solution more affordable than fault tolerance is to double all necessary hardware resources and maintain the data base on two normal systems. This document examines the issues involved in this 'hot backup' configuration, its advantages, its limitations and the system administration procedures.


'Hot Backup' Solution Overview

The 'hot backup' configuration involves two systems: one 'master' system, which is the system in operation, and a 'slave' system, which is in stand-by mode. Both machines are connected by a fast TCP/IP connection. Users are normally connected to the main system. The backup system is also booted, and has a copy of the data base on the main system. The two machines do not need to be absolutely identical: the backup machine just needs the necessary resources (disk, memory, connectivity, ...) to support the application(s).
During normal operations, all updates to the data base on the main system are applied to the backup system, over the network.
In case of a failure of the main system, the users are switched to the backup machine, and the application is restarted. The down time is limited to the switch over time (may be just the time for the terminal concentrators to establish an ethernet connection to the other machine), and the data loss limited to the updates not yet transmitted to the backup machine. This loss is usually limited to a few seconds worth of work.
Note that the backup machine is not necessarily idle. Other applications can be loaded on the backup machine. Also, since the backup machine has an exact copy of the data base, it can be used for editing reports, doing the file saves, etc.


Advantages

- The cost is less than a traditional fault tolerant solution, when the absolute fault tolerance is not required. The second machine does not need to be as powerful as the main system. A slightly slower machine can be used, as long as it can provide an acceptable level of service should the main system become unavailable.

- The backup system is not necessarily idle. As long as the main data base is not updated on the backup machine, it can be used to edit reports and to do the file saves (which relieves the main system of that task), and it can be used for development, etc.

- The machines do not have to be physically close to each other. The machines can be in two different locations, which provides protection against major accidents.

- Since the updates are applied at the logical level, as opposed to mirroring of the data on disks, by a system process which is different from the application process used on the main machine, operating system failures are less likely to create corruptions on the backup system.

- The slave system can be the backup of more than one master system. A slave system with a very large disk capacity can act as an on-line archive system for several applications.


Disadvantages

- On AP 6.1, the amount of data loss in case of system failure is uncontrolled. If the network bandwidth is sufficient, the amount of lost data will be 'small', but unknown. This can create problems on some applications. This problem is corrected on AP versions 6.2 and later.

- The system administration and recovery procedures require some manual interventions. The system relies heavily on Unix networking which must be understood by the System Administrator.


Main System Failure Recovery

This section outlines the operations required to recover from a failure of the main system. After the failure has occurred, the users have been switched to the backup system and the application restarted. The main machine is repaired and must now be brought back to the same level as the backup machine.

While the main machine is down, the data base on the backup machine is naturally evolving. To record all changes on the backup machine, all updates are recorded, using the transaction logger mechanism. If the repair time of the main machine is expected to be short (a few hours), the transaction journal can be left on the disk. If the repair time is expected to be longer, it is probably better and safer to write the transactions on tape.

Assuming the main machine's data base has been completely destroyed, following a multiple disk crash, re-synchronizing the main machine 'simply' involves doing a full save on the backup machine, restoring it on the main machine, and switching the users back to the main system. The problem is that the file save and restore operation can be very long, potentially taking days. It would obviously be unacceptable to stop operations during this time. Therefore, while the save and restore proceeds, updates to the data base must be logged. On version 6.1 and later, the updates can be stored to tape, since multi-tape is supported. On earlier versions, there is no choice but to do the logging on disk. After the restore has been completed on the main machine, the transactions which have accumulated during the save/restore operation are applied to the main data base. During this transaction log load, it is likely that more updates will be done on the backup machine, resulting in more transaction tapes. Depending on the volume of data, there may be a few iterations of this process: load a transaction log tape on the main system while more transaction tapes are being created on the backup machine. Eventually, the system will be almost in sync. The users are then disconnected from the backup machine, the very last transactions are written to a final tape, and this tape is loaded on the main system. All operations must stop for this short time. Both systems are now in sync. Users can be reconnected to the main machine, the transaction log across the network can be restarted from the main machine to the backup, and the system is operational again.

If there is enough disk space on the backup machine, and if the down time of the main system (including the file save/restore) is expected to be 'small', it is possible to leave all the updates on disk. Re-synchronizing the two machines is then simpler: After the restore, start the hot backup process across the network from the BACKUP machine, which now acts as the 'master', TO the MAIN machine, now acting as the 'slave'. This will transfer all the updates made to the backup machine. When the queue is emptied, the users can be switched back to the main machine. This avoids tape manipulation, but involves a higher risk factor, should a major problem occur on the backup system.


Backup System Failure Recovery

If the backup system fails, a procedure similar to the one described for the main system recovery must be applied. The only difference is that the users are never stopped. Essentially, a full save is taken of the main machine, restored on the backup machine, then all the updates are applied to the backup machine. The only impacts on normal operations are a higher system load due to the file save and, obviously, a higher risk, since there is no backup.


Making sure it works

This configuration is usually applied to very large data bases, and making sure everything works and that no data loss occurs is of utmost importance. Network reliability is obviously critical. The various processes (servers) involved in the communication constantly check on each other, assign numbers to the messages on the network, and also make sure the transaction logging mechanism itself is operating normally by periodically writing some test data and making sure the updates are sent over. The System Administrator can check the data bases by periodically running an application report on both systems and making sure the results are identical. All network incidents, as well as unusual circumstances, are reported to a predetermined list of users, so that an incident does not stay unnoticed for a long time. The section "hot-backup, TCL" describes the major system incidents and suggests some corrective actions.
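
The message-numbering check mentioned above boils down to detecting gaps in a sequence of message numbers. A sketch of such a gap detector (a hypothetical helper, not the actual server code):

```python
def find_gaps(seen):
    # 'seen' is the ascending list of message numbers received;
    # every skipped number is a potentially lost message.
    gaps = []
    expected = seen[0]
    for n in seen:
        while expected < n:
            gaps.append(expected)
            expected += 1
        expected = n + 1
    return gaps
```

Reporting a non-empty gap list to the operator is the kind of alert the servers raise on a network incident.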


System Setup

To set up a 'hot backup' system, the System Administrator must do the following steps. Each operation is detailed in the section "hot-backup, TCL".

- Establish a network between the two systems. This network must support TCP/IP (e.g., Ethernet, Token Ring, etc...). The System Administrator must set the network names of both systems, even though only the receiver's host name is used. The hot backup connection only requires access to TCP. Other elements like NFS, FTP, etc... are not required.

- Determine a free TCP/IP port number. Use the "netstat -a" Unix command to see what is currently in use. A value like 2000 or 3000 is usually safe.
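
As an alternative to eyeballing the "netstat -a" output, a candidate port can be probed by attempting to bind it. This is a hypothetical helper for illustration, not part of the hot backup tools:

```python
import socket

def port_is_free(port):
    # Binding succeeds only if no other socket holds the port.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("0.0.0.0", port))
            return True
        except OSError:
            return False
```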

- Load the Pick data base on the main machine. This will include setting the application, the user files, etc...

- On the master system, determine which files are going to be set as DL, i.e., for which files updates will be sent to the backup machine. It is generally not advised to set the system so that all updates to all files are sent to the backup system. This has the side effect of also mirroring system files. It is better to exclude the 'dm' account from the transaction log. Use the "set-dptr" TCL command to change the attributes of files and/or accounts.

- Do a save of the main machine, and restore it on the backup system. This can also be done using the network, as detailed in the Advanced Pick Reference Manual section "network save/restore, General". Otherwise, the save/restore can be done on tape.

- Setup the servers on both systems (see the section "Server Setup" in the Advanced Pick Reference Manual documentation "hot-backup, TCL").

- Start the master and slave servers.
Syntax
Options
Example
Purpose
Related general.tape-socket
general.network.save/restore
tcl.hot-backup

pxp.intro

Command pxp.intro Definition/System Architecture
Applicable release versions: AP/Unix
Category System Architecture (8)
Description consists of an account which allows communication with another Advanced Pick system, using either Unix communication tools, such as TCP/IP, X25, serial lines, pipes, or Advanced Pick send and get features.

Basic functions are:

- Item transfers. A TCL utility 'ppcp' (for Pick to Pick CoPy) allows copying items to another Pick virtual machine, either locally or across a network.

- Message facility. Sending Pick messages to a user logged on a distant system.

- Remote execution. Possibility of submitting a command to be executed asynchronously on a distant system.

- File mirroring. Executing all file updates and optionally deletes to a remote 'mirror' file over the network.

All functions are available through the "pxpcmd" TCL command. System administration is done by logging to the account "pxp".

Fundamental Notions:

"Local System"
The local system is the virtual machine on which the user is currently logged on. On a given hardware system, there may be several Pick virtual machines, and, therefore, several "local" systems.

"Host"
A host is an entity known to the local system as a potential destination. Care must be taken when defining a host over a network, since the name of the Pick host may be different from the name of the Unix host (as defined by the network).

"Service Access Point" or "SAP"
A service access point is an entity providing a communication service, such as a serial line, an Ethernet controller, or a Unix pipe. It is assumed that the SAP provides at least a transport-level service (TSAP): at minimum, a network level like X.25-3 or a transport level like TCP/IP. Each host defined on the local system has an output access point associated with it, so that the PXP subsystem can determine how to get the message onto the network.

"Routing"
The PXP subsystem will do routing, i.e., re-direct incoming messages to the appropriate destination host, inside a multiple-node network. Note this routing should not be confused with the routing provided by the Service Access Point (SAP), which simply makes sure a message gets to the appropriate node on the network.
Syntax
Options
Example
Purpose
Related pxp.sap
tcl.pxpcmd
local.system
pxp.host

access.selection.processor

Command access.selection.processor Definition/General
Applicable release versions: AP, R83
Category General (155)
Description responsible for presenting items to the LIST Output processor based on processing the selection criteria.
Syntax
Options
Example
Purpose
Related access.sellist
access.sselect
access.select
list.processor
access.selection.criteria
access.verbs

sdb

Command sdb Definition/Unix
Applicable release versions:
Category Unix (24)
Description tool for recovery in case of a system crash, following, for instance, a power failure.

The Monitor Debugger allows:

- Display and change Pick virtual memory.
- Display and change 'real' memory (remember that 'real' memory is, in fact, Unix virtual memory).
- Force a flush of the Pick memory space back to disk.
- Display, change the status of the system semaphores, to remove a dead lock situation.
- Get access to a locked system.
- Terminate processes, including doing a shutdown.
- Trace modifications to a memory area.
- Put low level break points in the virtual code.

IMPORTANT

When the Monitor Debugger is entered unexpectedly, try first to type g<return> to see if the system restarts. If not, hit the <BREAK> key again to examine the problem. There are cases when the debugger is entered wrongfully. For instance, when some specially long, un-breakable, tape operations (like a rewind) are running, hitting the break key several times on line 0 may enter the Monitor Debugger with a 'tight loop' condition, which means the process was engaged in a 'long' operation, preventing it from servicing the <BREAK> key.

Entering the Debugger

The debugger can be entered:

- Voluntarily, by hitting the <BREAK> key on a Pick process which has been started with the -D option, to enable the Monitor Debugger.
- Voluntarily, by setting a monitor trace on real or virtual memory.
- Voluntarily, by setting a monitor break point in virtual memory.
- Following a system abort. When a serious system abort occurs, the debugger is entered. The user cannot continue from such a condition.
- Following a Monitor HALT. When a process cannot continue execution, a HALT is executed on this process. This normally does not affect the other processes. The faulty process enters the Monitor debugger and waits. Type 'x' to display the hardware registers, note them to transmit them to Technical Support and type 'g' to try restarting the process. If the process aborts again, try a logoff and/or reset-user from another terminal before trying 'g'. If it fails again, type 'q'.
- By hitting the <BREAK> key 5 times in less than 5 or 6 seconds on the line 0 when the system does not respond (system stuck in a tight loop, in a semaphore dead lock or line 0 comatized). When a semaphore is left hanging, or when the processor enters a short tight loop, due to an ABS corruption, for example, the process does not respond any more and is incapable of going to the Virtual Debugger. The fifth time the <BREAK> key is pressed, the signal handler checks to see if the first occurrence of the break was serviced normally. If it is not, the debugger is entered. On a busy system, it might be necessary to try the <BREAK> sequence several times to get to the Monitor Debugger.
- By hitting the <BREAK> key on the line 0 when it waits for a system lock (overflow lock, spooler lock, etc...) for more than approximately 5 seconds.

The different causes of entry in the debugger are displayed by a message on entry and a special prompt, as defined in the table below:

CONDITION                            MESSAGE     PROMPT

Break key on line started with -D    <BRK>       B!
System abort                         <ABT>       A!
Monitor trace                        <TRC> addr  C!
Break point                          <BPT> bp#   I!
Monitor HALT                         <HLT> code  H!
Break key on line 0 on tight loop    <TLP>       T!
Break key on line 0 on virtual lock  <VLK>       V!

Referencing Data

Data can be referenced from the Monitor Debugger either in the virtual space or in the real memory space.

Data Specifications

Data location is defined by the following format:
address{;window}

The data is always displayed in hexadecimal.

Virtual Address Specification

The address of a virtual element can be represented by:
[r reg|{.}fid][.|,]disp

The base FID is either the content of the register reg or a FID number fid in decimal or in hexadecimal, prefixed by a dot. The displacement is either expressed in decimal, prefixed by a comma, or in hexadecimal, prefixed by a dot.

For example:


1.300 Offset x'300' in frame 1.
.12,16 Offset 16 in frame x'12'.
r3.100 Offset x'100' off the location pointer at by register 3.
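
The virtual address grammar above can be sketched as a small parser. This is a hypothetical reconstruction from the three examples, and the debugger's actual parser may differ:

```python
import re

def parse_vaddr(text):
    # [r reg|{.}fid][.|,]disp -- base is a register or a fid (hex if
    # dot-prefixed, else decimal); displacement is hex after '.' or
    # decimal after ','.
    m = re.fullmatch(r"(r(\d+)|\.([0-9a-fA-F]+)|(\d+))([.,])([0-9a-fA-F]+)",
                     text)
    if not m:
        raise ValueError("bad virtual address: " + text)
    reg, hexfid, decfid, sep, disp = (m.group(2), m.group(3), m.group(4),
                                      m.group(5), m.group(6))
    if reg is not None:
        base = ("register", int(reg))
    elif hexfid is not None:
        base = ("fid", int(hexfid, 16))
    else:
        base = ("fid", int(decfid))
    offset = int(disp, 16) if sep == "." else int(disp)
    return base[0], base[1], offset
```

For instance, parse_vaddr(".12,16") yields ("fid", 18, 16): frame x'12' is decimal 18, with a decimal displacement of 16.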


Monitor Address Specification

The address of a Monitor element can be represented by either of the two following forms:

{ [l|g] }.hexaddress{ [+|-] {.}offset}
/symbol{ [+|-] {.}offset}

The l prefix is used for local data. The g is used for global data.

The second form requires the presence of the file sdb.sym on the current directory or on /usr/lib/pick. This file is normally not shipped with the system. It is reserved for development purpose.

The optional offset which is added to, or subtracted from, the base address is either expressed in decimal, or in hexadecimal if prefixed by a dot.

For example:

.40000100 Absolute address.
l.0 First address in the local data space.
g.100+.10 Offset x'10' off the address x'100' in global data space.
/sys.time Address of symbol sys.time.
/tcb0+.100 Offset +x'100' off the symbol tcb0.


Window Specification

The window specifies the number of bytes to display. The window is expressed in decimal or in hexadecimal, prefixed by a dot. The default window size is 4. When using a symbolic name, the window is set automatically.

Changing Data

When a window of data is displayed, it is followed by an equal sign (=). Hitting Ctrl-N will display the next window, if available, and Ctrl-P the previous one. New data can then be entered as follows:


'char Character Insertion. A character string is preceded by a single quote. The characters in the display window are replaced by those in the input string, beginning from the left.

.hex Hexadecimal string insertion. A hexadecimal string is preceded by a dot. It must contain only hexadecimal characters and an even number of nibbles. The characters in the display window are replaced by those in the input string, beginning from the left.

{+|-}int Integer. The display window is treated as a numeric element. The window must be 1, 2 or 4 byte long. The new integer replaces all data in the window.
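
The three input forms could be decoded into raw bytes roughly as follows. This is a hypothetical sketch, and it assumes big-endian integer packing, which the text does not specify:

```python
import struct

def decode_input(text, window):
    if text.startswith("'"):            # character insertion
        data = text[1:].encode("ascii")
    elif text.startswith("."):          # hexadecimal insertion
        hexpart = text[1:]
        if len(hexpart) % 2:
            raise ValueError("odd number of nibbles")
        data = bytes.fromhex(hexpart)
    else:                               # signed integer fills the window
        fmt = {1: ">b", 2: ">h", 4: ">i"}[window]
        return struct.pack(fmt, int(text, 10))
    if len(data) > window:
        raise ValueError("input longer than window")
    return data
```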


Debugger Commands

The debugger prompts for a command with a one character code followed by an exclamation mark (!). Commands are terminated by a carriage return.


!shell Submit a Shell command. The command is submitted to shell.

? Display help information. The Unix file /usr/lib/pick/sdb.help is displayed using pg.

ba{.}fid[.|,]disp
bao{.}offset
ba+{.}offset
b{n} Add a break point. The effective address can be specified in different ways:

Form 1: The effective address is computed by adding the argument fid to the fid break point offset, and disp to the displacement break point offset defined by the bo command.

Form 2: The effective address is specified by the fid defined by the break point offset, with offset added to the break point displacement set by the bo command.

Form 3: The effective address is defined by the current value of R1 to which the offset offset is added.

Form 4: The effective address is defined by the current value of R1 to which n * 4 is added. If n is not specified, 1 is assumed. This form is used for architectures which have a fixed 4-byte instruction size (RISC).

The address must be at a virtual address boundary. Break points are global for the whole virtual machine. Once the break point is set, any process which hits it will stop. Break points are removed once they are encountered. A break point should not be set into an ABS frame which has been write required (mloaded into). Up to three break points can be set simultaneously. Upon successful setting, a '+' is displayed and the break point is displayed. Break points remain in action until they are explicitly removed or until the virtual machine is shut down.

bd[*|n] Delete break points. If * is used, all breakpoints are removed. If n from 0 to 2 is used, the specified break point is deleted.

bl List break points. List the Monitor break points.

bo{.}fid[.|,]disp Break point offset. Define a fid and displacement which are used in computing the effective address of a break point in the ba command.

d? Show/Change default dump string. In case of system abort (bus error, segmentation violation, ...), the system automatically dumps some critical elements in the file "/usr/tmp/ap.core". This file can be examined with the apcrash utility. After the display, the string can be changed by typing the new dump string after the '=' sign. The dump string can have up to 15 characters, one-character codes and arguments. See the Monitor Debugger command d below for the description of each code. The dump string can be different for each process. See the Advanced Pick Reference Manual documentation for the description of 'apcrash'.

dcommand.string Dump Pick core memory to the Unix file /usr/tmp/ap.core. The result of the dump can be examined with the utility apcrash. This command can be used, after an incident, to dump selected elements of the current Pick memory to be able to investigate the problem. See the Advanced Pick Reference Manual documentation for the description of 'apcrash'. The Unix file can be dumped to tape using a Unix utility like tar (e.g., tar cv /usr/tmp/ap.core). The content of the dump is controlled by command.string, which is composed of one-character codes, some followed by arguments. Codes can be separated by commas for readability. The order of the codes in command.string is unimportant. This dump utility is automatically invoked in case of a system abort (bus error, segmentation violation, etc...) before entering the Monitor debugger. What is dumped in this case is controlled by a default dump string (see the Monitor Debugger command d? above). Valid codes are:

a : Dump 'all'. This option is equivalent to the command string "l,g,b,p,r,c,0,f1". See the description of each code below.

0 : Dump the PCB of the current process. If the PCB is not attached at this point, this performs no operation.

b : Dump the buffer table.

c : Dump the current process context. All the frames currently attached to the process, their forward and backward links are dumped.

f{.}n : Dump the fid n. If the specified frame is not in memory, it is read from disk.

g : Dump the global space.

l : Dump the local (private) memory, not including the stack.

p : Dump the pibs.

r : Dump the hardware registers. The registers are dumped in the same order as described in the Monitor debugger 'x' command described later in this appendix.

s{.}start;[*|{.}size] : Dump the main shared memory segment. Start is the starting offset, expressed in bytes and size is the size expressed in KILOBYTES. If * is used instead of size, the entire shared memory segment, starting at the specified offset, is dumped.

v{.}n : Dump n virtual buffers.

e : Toggle the debugger ON/OFF. When OFF, prevents entry to the debugger with the <BREAK> key. On line 0, though, the debugger will be entered in some special cases (see section 'Entering the Debugger' above) even when the debugger is disabled.

f{!} : Flush memory. All frames modified in memory are written back to disk. If a disk error occurs, a minus sign is displayed. If the '!' option is specified, all frames in memory, even if they are not write-required, are written to disk.

g {fid.disp} : Go. Without any argument, the process resumes execution. If fid.disp is specified, control is transferred to the specified mode. fid is expressed as a relative offset in the current abs.

gl{-} : This command displays or removes group locks. With no options, the monitor prints the status of the global group lock (with a "G+" or a "G-"), and scans memory for any frames which are marked as locked. For each locked frame, the monitor displays the fid, the address of the buffer table entry, and all group lock information held in the frame itself. To clear all group locks, type "gl-". To clear a specific group lock, type "gl-{fid}". Note that locks cleared from the monitor debugger may still display with the "list-locks" command. Such locks should be cleared with a "clear-locks" command when the virtual machine becomes accessible. In general, group locks should always be cleared by TCL commands only. The monitor debugger should only be used when system access is denied due to a lock set on a critical file (like the "mds,," file).

h fid : Hash Fid. This command displays internal information about the specified FID if it is in memory, or the message <NIM> if the fid is not in memory. The content of the buffer table can be altered. Input is terminated by:

carriage return : return to debugger.
^N : Next buffer table entry, following the age queue forward link.
^P : Previous buffer table entry, following the age queue backward link.
^F : Next buffer table entry, following the hash queue forward link.

k{w}[f| pib ] : Kill. Terminate the process associated with the PIB pib, or the flusher if used with the key f, by sending a SIGTERM to it. The w key waits up to 10 seconds for the process to terminate. If it does not terminate, a SIGKILL is sent to it. Note that if the target process is in the Monitor Debugger or stuck on a semaphore, the SIGTERM signal will have no effect until it leaves the Monitor debugger or the semaphore is released. Killing the flusher ('k{w}f') will unconditionally log all processes off and shut down the virtual machine.

l fid : Display/Modify Link fields. Displays in hexadecimal the link fields of the frame fid, in the following format:

nncf:frmn:frmp:npcf:clnk=

nncf: number of next contiguous frame(s)
frmn: forward link
frmp: backward link
npcf: number of prior contiguous frame(s)
clnk: core link

New values for the fields can then be entered, separated by commas, with an empty field to leave a field untouched.

m {*}monitor.address{;window} : Display/Change Real Memory. Display the specified window at the real address as specified (see the section about Monitor address specification above in this section). If an asterisk (*) is used, the address is considered as a pointer and its content is used as the monitor address. The length and window specification applies to the area pointed at by the pointer.

IMPORTANT: Access to an illegal address will cause a segmentation violation or a bus error sending control back to the Monitor Debugger with an abort condition, from which it is impossible to recover. It is strongly advised to avoid absolute addresses, since they vary from implementation to implementation.

p {pib}{[.|,]offset}{;window} Display/Change PIB. Display window bytes in the pib specified by pib (current pib if pib is omitted), at the optional offset offset.

q{!} : Quit. Quit Monitor debugger. Confirmation is asked. Leaving the Monitor Debugger terminates the Pick process. When asked to confirm, the user must type y (no return). The optional '!' by-passes the normal Pick termination, and terminates the process abruptly. This form should be used only in extreme situations where even quitting normally from the Monitor Debugger aborts.

r reg{.disp}{;window} : Display data through register. Displays data pointed at by the register reg from 0 through 15. If specified, disp is added to the register displacement.

s [*|sem]{[?|+|-]} Display/Change semaphore status. Display or change the semaphore specified by sem, expressed from 0 through 3, or all semaphores if the key * is used. The key + sets (locks) the specified semaphore. The key - resets (unlocks) the specified semaphore. The key ? displays the information as in the example below:

00: O pid=0985
01: O pib=0023 W
02:
03: O pib=001A

where semaphore 0 is owned by the process with the pid number of x'0985' - only semaphore 0 is displayed with the Unix pid number instead of the Pick pib number; semaphore 1 is busy, owned by the pib x'23' and has at least one process waiting on it (W); semaphore 2 is free; semaphore 3 is busy, owned by process x'1A' but has no process waiting on it. If the owner pib was not yet set up when the command was executed, "pib=?" is displayed. Re-enter the command to see the owner pib.

S{f}{h}{i}{m}{s}{w}{-} Scan buffer table bits. This command displays and/or clears buffer table bits depending upon certain criteria. The options are as follows:

f Referenced bit
h Hold bit
i iobusy bit (disk read)
m Temporary mlock bit
s Suppress detail output. Show only total.
w Write-required bit
- Clear instead of display

The user specifies which bits to search for using the above options. When a buffer is found, it is displayed in a manner similar to that of the "h" command. The user may go backwards or forwards in the selection list with the CTL-P and CTL-N commands. At the completion, the total count of items is indicated. Note that the count is only accurate if forward movement only is used.

t{[mmonitor.address|fid.disp]{;window}} Set/remove Monitor trace. Without any argument, any pending monitor trace is removed. A minus sign is displayed in acknowledgment of the removal. Else, set a trace on the specified area of memory starting at monitor.address or the area of memory associated to fid.disp with a length equal to window. If no window is specified, the default window or the size of the monitor element is used. The maximum window size on a monitor address is 32767. The maximum window on a virtual address is the frame size. A plus sign is displayed in acknowledgment of the setting. The memory is checked for any change at every virtual branch or call, and every frame fault. If the memory is changed, the Monitor debugger is entered. When setting a trace on a virtual address, the frame is locked in memory. Removing the trace unlocks the frame if it was not locked when the trace was set.

v{code} Enter the Virtual Debugger with a code 'code'. If 'code' is not specified, it just enters the debugger as if the <BREAK> key had been hit. 'code=14' will log off the process. This command will display 'ADDR' and fail if the Virtual Debugger PCB is not set up.

x Display hardware registers. Display on the first line the program counter, followed by a variable number of 32 bit registers. The information is implementation dependent:


AIX: Registers r3 through r31.
SCO: Registers edi esi ebp esp ebx edx ecx eax
HP-UX: Registers r2 through r30
SINIX (MIPS): Registers r1 through r30
SVS (ICL DRS6000): Registers %pc %npc %g1 %o %l %i


y{!} Toggle Lock By-Pass. When ON, the process under debugger will by-pass all locks in the system, monitor and virtual. This option should be used very carefully, since it can create extensive damage if used on a live system. To be used safely, all other processes should be either logged off or stopped by setting a semaphore (see section 'Usage Hints' below). Unless used with the key !, the user has to confirm the activation of this by-pass.


Usage Hints

This section shows how to use the Monitor Debugger to perform some unusual actions. Extreme care must be exercised when using the debugger to remove a lock or a semaphore. This may cause data loss. It is strongly recommended to contact Technical Support when a system gets locked.


- Running in single user. To prevent all users from running, except one terminal:

At shell, activate a Pick process in the Debugger.
ap -D <return>
Once in the Monitor Debugger:
B! s1+ <return>
B! y <return> and confirm by typing 'y'
B! g <return>

All processes will now be locked, except the one running under the debugger and the flusher. This is useful when patching some critical virtual structures, like the Overflow table.

To restart the multiuser activity:

Break into the Monitor Debugger (either on the line which has the lock set, or on line 0 by hitting <BREAK> twice, since the line 0 should be locked by the semaphore).

This enters the Monitor Debugger.
B! y <return> to toggle the by-pass off
B! s1- <return>
B! g <return>

All processes will now be unlocked.

- Removing a dead lock. When a process has been killed by the system, it may have left a semaphore or a virtual lock behind. To remove the lock, make sure all users are inactive, and do the following:

On the line 0, hit <BREAK> up to six times, in less than 5 or 6 seconds. The process should drop into the Monitor Debugger, with the message <TLP> in case of a tight loop, or <VLK> in case of a virtual lock. Do the following, depending on the case:

T! f <return>
T! s*? <return>
Note which semaphore number is set and remove it by the command:
T! s semnum - <return>
T! g <return>

If the lock is a virtual lock, it may have to be cleared. This may have disastrous results if done without some inside information. It is strongly advised to contact technical support.
V! f <return>
V! r15;2 <return>
The system will display a number. If non zero, zero it. Else, there is another problem. Contact technical support.
V! r15;2 .001A= 0 <return>
V! g <return>


A virtual lock might also be one of the system wide locks. Do the following to identify it:
V! f <return>
V! 1.100;2 <return>
The system will display a number, normally 0. Type <ctrl> N 10 times or until a non-zero value shows. If no non-zero value shows, there may be another problem. Contact technical support.
V! 1.100 .0000= Ctrl N
V! 1.102 .001A= 0 <return>
V! g <return>


A virtual lock might also be an item lock (6.1 and above only). Do the following to identify it:
V! 1.15a;6 <return> <return>
The system should display a frame number. Now type the following:
V! .(frame number displayed before).0;4 <return>
The system displays an 8 digit hex number. The first 4 digits indicate the global lock, while the next 4 digits indicate the number of item locks. To zero both of these, type "0" at the "=" prompt followed by a <return>.
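
The split described above is simply a matter of taking the high and low 16 bits of the 32-bit word; as a trivial sketch:

```python
def split_lock_word(word):
    # word is the 8-hex-digit value displayed by the debugger:
    # high 16 bits = global lock, low 16 bits = item lock count.
    value = int(word, 16)
    return value >> 16, value & 0xFFFF
```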


A virtual lock might also be a group lock (6.1 and above only). Do the following to identify it:
V! gl <return>
If the system prints anything other than "G-" then group locks are set. To clear them, type the following:
V! gl- <return>
V! gl <return>
The system should report "G-" after the last command. Note that the virtual machine may still report locks set if the "list-locks" command is used. To fix this, go back to tcl as follows:
V! g <return>
If the system does not go to a tcl prompt, then some other lock condition exists and the user should contact technical support. If a tcl prompt appears, type the following:
: clear-locks (g <return>
Allow some time for "clear-locks" to complete as it can be delayed by processes which have been terminated abnormally. After completion, the group locks should be cleared. If any further deadlocks are encountered, contact technical support.


If only line 0 is stuck, it might be because it has been accidentally comatized. The WHERE command shows the first two characters of the status field as FE or 7E. A logoff from another port will un-comatize line 0; alternatively, do the following:
T! p.0;1 <return>
T! p.0;1 .7E= .ff <return>
T! g <return>


- Enabling the debugger on line 0. If line 0 has been started without the -D option, it is impossible to get into the Monitor debugger unless there is a lock. To enable the debugger, temporarily set a virtual lock in order to drop into the debugger, then enable it and restart, as follows:
At TCL:
: debug <return>
! 1.102;2 <return>
! 1.102;2= -1 <return> This sets the overflow lock
! g <return>
Line 0 (and the WHOLE system) is now locked. Wait 5 seconds and hit the <BREAK> key. This drops into the debugger.
V! 1.102;2 <return>
V! 1.102;2 .FFFF= 0 <return>
V! e <return>
V! g <return>
The Monitor Debugger is now enabled on line 0.


- Finding your line number. To determine the PIB number on which the debugger is running, do the following:
Break into the debugger.
B! p.18;2 <return>
This displays, in hexadecimal, the PIB number plus 1.
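Since the debugger shows the PIB number plus one, in hex, converting back is a one-liner (an illustrative Python sketch, not a Pick utility):

```python
def pib_from_debugger(hex_display):
    """Convert the debugger's hex display (PIB number + 1) back to the PIB number."""
    return int(hex_display, 16) - 1

print(pib_from_debugger("0B"))   # 10: a displayed x'0B' means PIB 10
```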


- Restarting the virtual machine after an abort early in the boot stage. If the virtual machine boot aborts very early (during or right after the 'Diagnostic' message), the boot can be restarted quickly, after correcting the error when possible, as follows (line 0 must have been started with the '-D' option):
! Hit the <BREAK> key to enter the Monitor debugger
B! g3.0 <return>
This should redisplay the message 'Diagnostic ...' and proceed.

Precautions

A process started with the -D option has some special privileges, which might lead to data destruction if used indiscriminately:

- Access to the virtual machine will always be granted, even if the Initialization lock is set. In particular, it would be possible to start a user process while line 0 is in the process of initializing the virtual machine. Therefore, always make sure that line 0 has reached, at least, the 'Diagnostics...' stage before starting a process.

- A process with debugger privilege may by-pass all locks, including virtual locks! When changing data structures, be sure that nobody is accessing the virtual machine, or, better, set a semaphore, as shown above, to prevent concurrent access. For the same reason, use the debugger on only one line at a time.

- When the memory is full, the debugger can abort with the message 'MEM FULL'. Do a flush and retry until it succeeds.

- When the system runs in 'single user' (with a lock set by the command s1+) and with the lock by-pass activated, do not try to shut down the system with the TCL command shutdown. Instead, go to the monitor debugger, do a flush (f) and kill the flush process (kf). This terminates all active processes.

- When changing virtual memory with the debugger, it is a good idea to flush memory frequently, using the f command.
Syntax
Options
Example
Purpose
Related

Pickto.ap.3

Command Pickto.ap.3 Article/Article
Applicable release versions: AP
Category Article (24)
Description identifies differences between R83 and AP

Contributed by Ron Davis
(Original article ran in PickWorld Magazine)

The final installment of this series considers the fine points of moving an Advanced Pick System to a different Advanced Pick platform.

Differences Between R83 and Advanced Pick

Porting applications from one platform to another is a task which has brought a cold sweat to the brow of more than one software developer. Pick Systems has made Advanced Pick as hardware-independent as possible, but a few platform-specific issues remain. There are some minor differences between Advanced Pick platforms which must be considered.

Advanced Pick-To-Advanced Pick Considerations

In porting Advanced Pick applications to other sites and platforms, or in adding software to existing sites, you may wish to consider the following:

Operating System

* Different platforms may not support certain Host Operating System commands. If you use the '!' or 'shell' commands to execute Host OS functions, you may need to change them.

File System

* The frame size may be different (which will affect performance), unless the files are re-sized.

* Files may be designated as case sensitive, which may pose a problem.

Files

* There are several files and items in the 'dm' account which can affect system operation:

'dm > abs' (Custom abs files)
'dm > bp' (Pick/BASIC programs)
'dm > users' (User logon / password / permissions file)
'dm > devices' (Terminal / printer devices in the system)
'dm > fonts' (Printer fonts file)
'dm > iomap-file' (Keyboard definition)
'dm > kb.pc' (Keyboard definition)
'dm > kb.fk' (Function keys)
'dm > messages seq' (The method of sorting for the 'ms' correlative)
'dm > messages legend' (The legend printed at the bottom of reports)

Access / TCL

* There are a few verbs which affect system operation:

case, set-break, set-esc, brk-level, esc-level, legend
tcl-hdr

* There are several commands which are specific to the Hosted Unix platform. If you are changing to another platform, those commands may work differently or be inoperative.

Some of those commands are:

!, .profile, add-font, alarm, cal, cai, cc, cd, config, cpio, disc, ecc, env, environ, exit, export, fuser, grep, import, kill, list-device, list-lines, listbi, ll, ls, pg, phantom, pick, pid, ppcp, ps, psh, psr, pwd, pxpcmd, pxpw, reset.port.unix, rmbi, set-8mm, set-batchdly, set-break, set-cmcm, set-device, set-esc, set-flush, set-func, sh, shell, shpstat, startshp, stty, su, trap, tty, unix, useralarm, vi

Pick/BASIC

* Different hosted platforms may not support certain Host Operating System commands. If you use the '!' or 'shell' commands within Pick/BASIC 'EXECUTE' statements to execute Host OS functions, you may need to change them.

* Hosted systems have the additional capability of extending Pick/BASIC by adding built-in C functions, using the 'addbi' command. You may have to port over those custom C functions.

* Pick/BASIC's substring replacement feature changed as of March 1992. The '[ ]' substring operator now works differently, although existing programs will still compile.

substring = string[ start.position, no.of.chars ]

New Method:

If start.position < 1, the system will use "1".
If start.position > len(string), the system will return an empty string.
If no.of.chars < 1, the system will return an empty string.
If no.of.chars > len(string), it returns the remaining portion of the string.
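The extraction rules above can be modeled as follows (a Python sketch of the stated rules, not Pick/BASIC; the function name is invented for illustration):

```python
def substring(string, start, nchars):
    """Model of the post-March-1992 rules for substring = string[start, nchars]."""
    if start < 1:
        start = 1                  # start.position < 1 -> use 1
    if start > len(string) or nchars < 1:
        return ""                  # past the end, or no.of.chars < 1 -> empty
    # no.of.chars beyond the end returns the remaining portion of the string
    return string[start - 1:start - 1 + nchars]

print(substring("ADVANCED", 0, 3))    # "ADV"
print(substring("ADVANCED", 4, 99))   # "ANCED"
```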

string[ start.position, no.of.chars ] = substring

New Method:

If start.position < 1, the system will insert the absolute value of start.position spaces in front of the string.
If start.position = 0, the system will use "1".
If start.position > len(string), the system will append spaces at the end of the string so that the string will be start.position characters long, and then the substring will be appended.
If no.of.chars <= 0, the system will insert the substring prior to the first character in the string.
If no.of.chars < len(substring), it copies only those characters.
If no.of.chars > len(substring), it copies the entire substring, and appends spaces for the remaining characters.
If no.of.chars > len(string), it appends the additional characters to the end of the string.

string[ delimiter, start.field, no.of.fields ] = substring

New Method:

If start.field < 1, the system will use "1".
If start.field > no.of.fields, the system will add null fields (separated by the appropriate delimiters) so that the string will contain the proper number of fields.
If no.of.fields < 0, all the fields after the start.fieldth field are deleted from the string, and the substring is inserted at that point.
If no.of.fields = 0, no fields are deleted, and the substring is inserted in the string before the start.fieldth field.
If (start.field + no.of.fields) < (the number of fields in string), the specified fields are replaced, and any remaining fields are nulled.
If (start.field + no.of.fields) > (the number of fields in string), the system adds null fields (separated by the appropriate delimiters) so that the string will contain the proper number of fields. Please check your programs for its use.

Text Editing

* (See the note on the 'dm > devices' file.)
* (See the note on the 'dm > iomap-file' file.)
* (See the note on the 'dm > kb.pc' file.)
* (See the note on the 'dm > kb.fk' file.)

Document Processing

* (See the note on the 'dm > devices' file.)
* (See the note on the 'dm > fonts' file.)

The Spooler

* Some platforms do not support the 'startshp' command.

Tape

* Some platforms do not support the 'set-device' command. You may need to replace it with:

set-floppy, set-sct, set-half, set-8mm

Or vice-versa...

Assembly Language

* User-Exits may change from time to time, and from platform to platform. It would be wise to double-check any User-Exits that you use in processing codes and/or Pick/BASIC programs...

* As Advanced Pick goes through different monitor revisions, some assembly language will have to be re-assembled. Pick Systems has promised that AP will be binary-compatible across hardware, and someday it will be. Until then, reassemble your code.

* If you are using custom-written software from a vendor company, you may not have the source code available. You will have to contact them for a run-time version of their software for Advanced Pick.

As you have read, the primary differences between R83 and Advanced Pick deal with the new concepts that have been added, and the conflicts that may arise between different implementations of Advanced Pick are mainly due to the platform-specific extensions available. I hope this road map helps you to avoid most of the pitfalls.

At last we have come to the end of the series, but by no means the end of this topic. There will be discussions in the future about upgrades, conversions, data transfers, and offloading. PickWorld welcomes your comments on and contributions to these issues.
Syntax
Options
Example
Purpose
Related tcl.set-break
tcl.set-esc

pib.status

Command pib.status Definition/General
Applicable release versions: R83, AP
Category General (155)
Description one of the pieces of data returned by the "where" command. The "pibstat" program is used to break the information down into the possible binary states.
Syntax
Options
Example
Purpose
Related tcl.pibstat
tcl.where

up.cut.paste

Command up.cut.paste Definition/Update Processor
Applicable release versions: AP
Category Update Processor (113)
Description moves or copies data from one place in an item to another.

Text can be cut and deleted or cut with the original left in place (copied). After the text is cut, it is stored in the cut buffer, replacing the previous contents of the cut buffer.

Text in the cut buffer can be pasted into the current item or it can be pasted into a specified item in another file.

Text can also be pasted from a specified item and file.

See the (UP) "c" command for a more detailed explanation.

As of AP releases 5.2.5RS and higher, it is now possible to cut and paste when editing the TCL command stack item.
Syntax
Options
Example
Purpose
Related up.delete.text
up.c
up.z
up.cl
up.cd
up.cc
up.cp
up.cw
up.cr
up.zz
up
up.ci
up.co

tcl.set-iomap

Command tcl.set-iomap Verb: Access/TCL
Applicable release versions:
Category TCL (746)
Description allows translatable input and output for each port on the system.

"port.number" is the number of the port (or pib) for which a keyboard table is installed.

"id" is the four-character item-id of the user-defined keyboard item. It must be numeric, and is not required when used with any of the following combinations of options: "ir", "ri", "or" or "ro".

The virtual software of Pick reads the keyboard item and passes it to the monitor. "id", the item-id, is copied to the ID field of the I/O translate table. The counter in the second field is incremented or decremented by one, depending on the option. The character set is copied to the third field of the entry. Depending on the option specified by the user, either the IT or the OT pointer is replaced by the address of the entry. The new keyboard table overwrites the old one, even if a match is found when comparing the new item-id with the one in the table.

The monitor notifies the virtual machine if an installation is not possible, and the virtual machine, in turn, notifies the user.

There is a file in the "dm" account called "iomap-file". This file contains keyboard items that are defined by users. The item-id should be numeric and should be in the range of 0 <= item-id <= 2147483647. The keyboard item has 34 attributes and has the following format:

attr 0 : item-id

attr 1 : user comments

attr 2 to 33 : ASCII codes of characters 0 to 127, in hexadecimal. For example, character 'A' has ASCII code x'41' and should be entered in the item as '41'. Each attribute from 2 to 33 has exactly 8 entries. Users can put comments after the 8th entry on an attribute.

The following example shows how to translate the character 'A' to 'B':

- ASCII code of A is x'41' = d'65'

- ASCII code of B is x'42' = d'66'

- 65 divided by 8 = 8 with remainder 1

- The translated character, which is B, should be entered at attribute 8+2 = 10, column 1+1 = 2, as 42. The reason 2 is added to 8 is that counting begins at 0 and attr 1 is used for comments.
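The attribute/column arithmetic generalizes to any character (an illustrative Python sketch of the layout described above; the helper name is invented):

```python
def iomap_position(ch):
    """Locate where a character's translation goes in an iomap item.

    Attributes 2-33 each hold 8 hex entries covering ASCII codes 0-127,
    so code // 8 selects the attribute and code % 8 the column."""
    code = ord(ch)
    attr = code // 8 + 2      # +2: attr 0 is the item-id, attr 1 is comments
    col = code % 8 + 1        # columns are counted from 1
    return attr, col

print(iomap_position("A"))    # (10, 2): 65 // 8 = 8 -> attr 10; 65 % 8 = 1 -> column 2
```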
Syntax set-iomap port.number{,id} ([i|o|ri|ro]
Options ? Provides on-line help

i Installs input keyboard.

o Installs output keyboard.

ir (or ri) Removes an installed input keyboard.

or (or ro) Removes an installed output keyboard.
Example
Purpose
Related tcl.set-imap
filename.iomap-file
filename.keyboards

tcl.t-att

Command tcl.t-att Verb: Access/Tape Commands
Applicable release versions: AP, R83
Category Tape Commands (36)
Description attaches the tape unit or floppy disk drive to the current process and optionally assigns the blocksize to the tape i/o buffer.

On Advanced Pick releases prior to 6.1.0, only one device at a time can be attached to the system. It is advised to use the "set-" commands, since they establish which type of device is being used, as well as the default blocksize.

"blocksize" is an integer number indicating the number of bytes in each block.

Floppy disks may be set at any number between 20 and 512, but are usually set to 500 or 512. The default is 500.

Half-inch tapes may be set to anything between 512 and 16384, inclusive. The default is 8192.

Streaming cartridge tapes (SCT) may be set to anything between 2048 and 16384 and must be a multiple of 512. The default is 16384.

8-millimeter tapes may be set to anything between 512 and 16384 and must be a multiple of 512.
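The per-device blocksize rules above can be summarized in a small table (a hypothetical Python helper, not a Pick utility; it simply restates the stated limits):

```python
# (minimum, maximum, required multiple, default); None = no default stated
RULES = {
    "floppy":    (20,   512,   1,   500),
    "half-inch": (512,  16384, 1,   8192),
    "sct":       (2048, 16384, 512, 16384),
    "8mm":       (512,  16384, 512, None),
}

def valid_blocksize(device, blocksize):
    """Check a proposed t-att blocksize against the stated device limits."""
    lo, hi, mult, _default = RULES[device]
    return lo <= blocksize <= hi and blocksize % mult == 0

print(valid_blocksize("sct", 16384))  # True
print(valid_blocksize("8mm", 1000))   # False: not a multiple of 512
```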

After attaching the device to the current process, all of the regular "tape" handling verbs, like "t-rew" and "t-fwd", are available, even when using floppy diskettes. Diskettes must be "rewound" before writing to them or reading from them. Usually, this doesn't take long.

If the tape unit is attached to another line, the process displays the port that has it attached. The "u" option of the "t-att" command attaches the tape "unconditionally", regardless of what it may be doing. This may be necessary if the transaction logger is enabled (see "t-det").

The "t-att" verb should be used before any tape manipulation process, such as executing tape control verbs, generating print file output to the tape using the "t" option in "sp-assign" or "sp-edit", executing tape reads and writes in Pick/BASIC, or generating tape output using the "reformat" and "sreformat" verb.

All tape manipulation processes on the system check for attachment, attach the tape if possible, generate the required message, and terminate if the tape is not available.

The implied "t-att" uses the current tape block size specification and remains "set" until one of the following events occur:

1) using "t-att" with a numeric argument.

2) using "t-att" without a numeric argument.

3) using any tape verb which checks for tape attachment.

4) executing the "t-rdlbl" verb when a labeled tape is mounted. The tape block size is stored in labels written to tape. By reading a tape label through "t-rdlbl" or "t-read", the current blocksize is changed to the size stored in the label read.
Syntax t-att {blocksize} {(options}
Options u Unconditionally attaches the tape. It is strongly advised to verify that the tape is not actually being used before stealing it from another process.

z Unconditionally attaches the tape, except if the tape is attached to the transaction logger.
Example
Purpose
Related basic.weof
basic.writet
tcl.t-rdlbl
tcl.sel-restore
tcl.set-floppy
tcl.set-device
proc.it
basic.readt
tcl.set-sct
access.t-dump
tcl.set-half
access.reformat
access.sreformat
tcl.set-floppy
tcl.t-status
basic.rewind
tcl.admin.tape
tcl.startlog
tcl.t-erase
tcl.t-status
tcl.sp-assign
tcl.fuser
tape.handling.verbs
tcl.set-8mm
basic.readtx
basic.readtl
basic.onerr
tcl.format
access.t-load
access.tape
start.buffer.mark
tcl.config.tape
tcl.t-select
dummy.restore
tcl.t-att.link
tcl.abs-dump

access.ss

Command access.ss Modifier/Access: Verbs
Applicable release versions: AP
Category Access: Verbs (152)
Description produces columnar and cross totals on rows of designated attributes within a given range of dates.


attr.name1 is the attribute-defining item containing a date which limits the tabulation. The output form of this attribute determines the column headings. The output form of "attr.name1" does not have to be the same as the values included in "beg.date" and "end.date". For example, "attr.name1" may display the month, while "beg.date" and "end.date" must be in the form "mm/dd/yy" enclosed in double quotes (").

beg.date is the beginning date of the range to be included. If not specified, "beg.date" is determined by the number of columns that will physically fit on the display or printout.

end.date is the ending date of the range to be included. If not specified, the current system date is used as the default.

attr.name2 is the attribute that contains the values to be columnar and cross (row) totaled.


If the width of the report exceeds the width of the output device, the extra columns are truncated.

If two or more "ss" connectives are used in the same Access sentence, a single report is generated, with columns for each subsequent "ss" connective following the totals column for the previous connective.

Column headings are created for each possible value produced by "attr.name1" within the "beg.date" and "end.date" range. Output-conversions are processed before producing the heading. Correlatives are not processed. The format of the column is determined by "attr.name2".

Each cell in the listing contains the total value of "attr.name2" for the date specified in that column heading.

"roll-on" can be used in conjunction with "ss" to produce subtotals by specified categories. Each rolled attribute value produces a row in the output. If no "roll-on" is specified, only a total line is produced.

The granularity of the date display is determined by the output-conversion of "attr.name1". The date granularity is the value of the last multiply in the output-conversion. If the following output-conversion is used to provide week-ending dates, the report will have a granularity of 7 days and the dates will be the Saturday on or after the date in the item. In both of the following examples, the date is stored in attribute 4.


a4/'7'*'7'+'6'
d2/


Whereas the following output-correlative yields a granularity of 3 days:


a4/'3'*'3'
d2/
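Both conversions follow the same divide-multiply-add pattern, which can be modeled as follows (illustrative Python; Pick internal date 0 is Sunday, 31 December 1967, so values congruent to 6 mod 7 fall on Saturdays):

```python
def bucket(pick_date, granularity, offset=0):
    """Model of the date conversions above: a4/'7'*'7'+'6' is bucket(d, 7, 6)
    (week-ending Saturday); a4/'3'*'3' is bucket(d, 3) (3-day buckets)."""
    return (pick_date // granularity) * granularity + offset

print(bucket(10000, 7, 6))   # 10002: the Saturday on or after internal date 10000
```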


The number of dates to display is based upon the display width of the "attr.name2" attribute, but the width of the columns on the report is based upon the greater of the "attr.name1" display width and the "attr.name2" display width.
Syntax ss attr.name1 {{beg.date} {end.date}} attr.name2 {(g}
Options g Suppresses the row and column totals.
Example
sort invoices with code "c" "d" and with amount ne 
"" by code by category ss quarter "01/01/92" 
"12/31/92" amount roll-on code roll-on category 
"'d'" det-supp id-supp

The previous example produces a report with subtotals for each "code" 
and totals to each category within the code for the quarters "1-92", 
"2-92", "3-92" and "4-92".

sort invoices by customer ss quarter "06/30/92" amount roll-on 
customer det-supp id-supp

This produces a report by customer for three quarters, "4-91", 
"1-92", and "2-92". The beginning date range is calculated 
based on the attribute's width, the output device's width, and the 
ending date of "06/30/92".
Purpose
Related access.attr.name
access.roll-on
access.introduction
spreadsheet.article

tcl.set-ovf-local

Command tcl.set-ovf-local Verb: Access/TCL
Applicable release versions: AP 6.1
Category TCL (746)
Description sets and displays the local overflow cache size. The presence of a local overflow cache enhances both performance and reliability, and is automatically set up to a reasonable default by the system. Users who wish to further tune their overflow usage may use the "set-ovf-local" command to change this cache to tailor it to specific needs.

When given without any options, the "set-ovf-local" command displays the cache status for the current line. The display shows a 2 by 2 grid of numbers. The explanation for these is as follows:

Legend: (see examples)

"Current" is the current number of frames actually held in a given cache.

"Max" is the maximum number of frames that the cache may hold. Any overflow released to the cache when it has reached its maximum size will be deposited directly into the main overflow table.

"WS" is the row showing the current and maximum workspace cache sizes.

"File" is the row showing the current and maximum file cache sizes.

Parameters:

Besides displaying the current cache status, "set-ovf-local" can be used to modify the current cache maximums by specifying the following numeric parameters on the command line:

ws.max This sets the maximum workspace cache size.

fs.max This sets the maximum filespace cache size.

Cache descriptions:

The workspace cache is a generalized cache used for virtually all memory needs. It is automatically set to a default so that simple TCL commands and FlashBASIC and Pick/BASIC programs will not need to access the global overflow table and can thus avoid the performance cost of doing so. If the user repeatedly EXECUTEs more complex programs, boosting the workspace cache size may improve performance.

The file cache is used only when update-protection is active. The update-protection scheme deposits frames into this buffer which it guarantees are synchronized, so that no other file or workspace on the disk points to them. This virtually eliminates the possibility of so-called "doubly-linked frames", where two files are attached to the same frame after a power outage or crash. This cache should be large enough to cover the largest group or largest pointer item that is protected by the update-protection scheme. For example, if the user has a file with 30K-byte items on a 2K-frame system, then the file cache should be set to at least 30K/2K = 15 frames.
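The sizing rule at the end of that paragraph is just a ceiling division (illustrative Python, using the 30K-item / 2K-frame example from the text):

```python
import math

def file_cache_frames(largest_item_bytes, frame_bytes):
    """Frames needed for the file cache to cover the largest protected item or group."""
    return math.ceil(largest_item_bytes / frame_bytes)

print(file_cache_frames(30 * 1024, 2 * 1024))   # 15
```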

Cache tuning:

Ideally, a system should be able to operate out of private overflow caches as much as possible. The less the system accesses the global overflow pool, the greater the system performance, and the less chance there is for the overflow to become corrupted in the event of a power outage. To tune the cache sizes, users can try different cache settings, and run the "buffers (s" program to see the change on overflow access. The fields "WS OVF locks" and "FILE OVF locks" show the number of references to the global overflow table per second.
Syntax set-ovf-local {ws.max}{, fs.max} {(options}
Options s Suppresses the display.

f flushes all caches. All frames held within the current cache are released back to the main table. Note that this operation is automatically invoked when the maximum values are changed. The act of logging off the system will automatically flush all local overflow caches.

g Copies the current maximum cache settings into the global default. Whenever a new user logs on, he/she will automatically acquire this default value. Note that the global defaults are automatically preset to a factory-specified value upon reboot, so if a permanent global change is desired, the user should place the "set-ovf-local" command in the system-coldstart macro.
Example
set-ovf-local
        
Private Overflow Cache Status:
            
           Current   Max
                   
WS            2      20
File         10      30

This shows that there are currently 2 workspace frames in the overflow cache 
with a maximum capacity of 20 frames, and 10 file space frames in the overflow 
cache with maximum capacity of 30 frames.

set-ovf-local 100,300 (s

This command will set the maximum workspace cache size for the current line to 
100, and the maximum file space cache size to 300.

set-ovf-local (fs

This command releases all frames in the current overflow cache to the global 
pool.

set-ovf-local 100,300 (sg

This command will set the maximum workspace cache size for the current line to 
100, and the maximum file space cache size to 300.  The settings will also be 
copied into the global default area.  All users logging into the system after 
this command has been executed will automatically get caches sized to this 
specification.
Purpose
Related tcl.init-ovf
tcl.rebuild-ovf
tcl.set-runaway-limit
tcl.set-ovf-reserve
pointer.item

tcl

Command tcl Introductory/TCL
Applicable release versions:
Category TCL (746)
Description The Pick System Terminal Control Language (TCL) is a system-level command language with system-defined or user-defined statements that can be executed individually or sequentially. System-defined statements are called TCL verbs. User-defined statements are: macros, menus and cataloged Pick/BASIC programs. The first word of a TCL statement must be either a system verb, macro, menu or cataloged Pick/BASIC program.
TCL commands can be typed in when the TCL prompt colon (":") displays. The completed command is entered for processing by pressing <Ctrl>+m or <Return>. Because mistakes do occur, TCL editor and TCL stack facilities are provided.
The TCL editor allows corrections to commands after they are entered, but before they are executed. The TCL stack saves each command entered at the TCL prompt and allows you to recall commands for correction and/or execution. Refer to the sections "Editing TCL Commands" and "TCL Stack" below and to the entries tcl stack and tcl edit commands for more information.
TCL commands include the Access and Spooler commands. Access is a system-level information retrieval language that allows you to query your data base without writing complex programs. The Spooler commands allow you to control how information is output to the printer.

Editing TCL Commands
The TCL editor uses the Update processor (UP) to enter commands, so the TCL editor commands are similar to the UP commands. A TCL command may be created and edited as if it were a paragraph. Pressing a <Ctrl>+m or <Return> key within the TCL command processes the entry.
The TCL editor is initially in the overtype mode. To toggle between overtype and insert mode, type <Ctrl>+r. The following commands function the same in the TCL editor as they do in UP. Refer to the keyboard template provided later in this section.
<Ctrl>+

b Move cursor up one line.
e Delete to end of sentence (command).
g Move cursor to end of sentence (command).
h Backspace and replaces character with space.
i Go to next tab position on line.
j Move cursor left.
k Move cursor right.
l Delete character.
m Insert mode: processes the entry when the cursor is at the end of a line; inserts a carriage return/line feed when the cursor is within the line.
n Move cursor down one line.
o Delete from cursor to end of word.
r Toggle between overtype and insert modes.
t Move cursor to beginning of command.
u Move cursor to next word.
w Insert single space.
x Exit TCL command and leaves just the TCL prompt.
y Move cursor back one word.
z z Undo last delete.


TCL Stack
When a command is entered at the TCL prompt, the system saves the command in the TCL-stack file of the dm account. In AP, your stack is not terminal dependent. If you leave a terminal without logging off, another user can use the terminal under your user-id. This causes the new user to "step on" your stack.
Changing any part of a TCL command in the stack causes that stack entry to be moved to the top of the stack. This feature tends to keep the stack compact. However, the TCL stack does not have a maximum number of entries and can continue to grow indefinitely. Therefore, from time to time, the stack should be pruned, either from TCL or by using the update command to modify the actual stack item (u dm,tcl-stack, user.name).
The following commands are used to move through a stack and to retrieve and run previously entered commands:

<Ctrl> +
a Searches for the entered string.
c p (cut and pop) Removes the current TCL command from its present position; places it at the top of the stack.
d Goes back to the previous command in the stack.
e If the cursor is on the first character, deletes the entry from the stack and displays the next command down the stack; otherwise deletes to the end of the command.
f Goes forward to the next command up in the stack.
p Moves a duplicate copy of the current TCL command at the current position in the stack to the top of the stack.
x Clears the displayed command from the screen and moves the pointer back to the top of the stack.
z Goes to the command at the top of the stack.
z a Same as <Ctrl>+a but searches to the top of the stack.


Pushing Levels
The execution of any command or program can be interrupted by pressing the <break> key. When a command or program is interrupted, the system stops execution and saves all parameters so that execution can be resumed exactly where it was interrupted. When a process is interrupted at the normal system level, the system prompts with two colons (::). At this point the command or program is said to be "pushed one level". Up to 16 levels can be pushed. The number of colons in the prompt indicates the number of levels pushed. The normal system level is 1.
To return to the previous level and continue execution of the process at that level, press <Ctrl>+m. To abort the process at the next lower level, use the end command. Refer to the entries level pushing and levels for more information.

Macros
A macro is a process that executes one or more TCL commands. Macros are stored in the master dictionary with the name of the macro as the item-id. The macro processor is provided for simple TCL procedures. In general, complex procedures should be written as Pick/BASIC programs.
When a macro name is entered at TCL, it may be followed by any number of parameters. These parameters are added to the end of the first TCL command in attribute 2 as additional language elements and then passed for processing.
The first line of a macro must contain the character m (modify mode) or n (non-stop mode). Each subsequent line is considered a TCL command. If the m code is used in the first attribute of the macro, the TCL command is displayed so that changes can be made to it before it is executed. If n is used, the macro runs immediately.
A macro may be created with the Update processor or the create-macro verb. The create-macro verb takes the last statement entered at TCL and converts it to a macro. Enter the TCL statement to store as a macro and press <Ctrl>+m. When the cursor returns to TCL, enter:
create-macro macro.name
Note that create-macro sets attribute 1 of the macro to m. As long as attribute 1 equals an m, the macro name must be enclosed in double quotes when entered at TCL. Use the Update processor to replace the m with an n if immediate execution is desired. To create a macro using the Update processor, enter:
u md macro.name
For more information about using macros, refer to the entries macro and create-macro.
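A stored macro item, as displayed by the Update processor, might look like the following (the macro name, file, and attribute names here are hypothetical):

```
listcust
001 n
002 sort customers by name
```

With n in attribute 1, entering listcust at TCL executes the sort immediately; replacing the n with m would display the statement for editing before execution.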

Menus
A menu provides a selection of processing choices. Menus are items in the master dictionary. The menu processor automatically formats the menu on the screen and you can then select one of the menu options for processing by entering the option number. The format of the menu item in the md is:
001 me {comments}
002 title
003 option 1
help 1
statement 1
004 option 2
help 2
statement 2.1
statement 2.2
...
Refer to the entry menus for more detailed information.

TCL Verbs
TCL verbs which operate exclusively on files and items use a consistent format to specify the file and items:
tcl-verb file.reference {item.list} { (options) }
The format elements are explained in the Access section below.

Access Verbs
Access is a system-level information retrieval language that allows you to query your data base without writing complex programs. Access uses TCL commands as verbs and displays the results on terminals or printers. Access verbs operate on specified files and items based on optional criteria, specifications, modifiers, limiters, and options.
Access is often described as an ad-hoc data query language, and the greatly expanded dictionary capabilities of Advanced Pick offer the possibility of real nonprogrammer access to the data base. Access, used in conjunction with the Update processor (UP), makes Advanced Pick one of the most accessible data management systems in existence.
Additional AP features enhance the already comprehensive query language. Pick/BASIC calls from dictionaries are used for complex data calculations and output formatting. The ss (spread-sheet) connective allows the user to print out Access reports in spread-sheet format. This is achieved by adding the ss connective to the sort sentence and defining the desired range parameters. B-trees have increased the speed and performance of Access.
Access commands are entered at TCL and thus can be recalled, modified, or executed through the TCL stack. Access sentences may also be stored and invoked through macros, menus, PROCs, and Pick/BASIC.
An Access statement has the following form:
verb file.reference {item.list} {selection criteria} {sort criteria} {output specifications} {print limiters} {modifiers} { ( options ) }
The verb and file.reference are required as operator and operand respectively. The verb must be the first word of the command. All other elements are optional and are used to modify either the operator, operand, or output. Selection criteria, sort criteria, output specifications, print limiters, and modifiers follow the item.list and may be in any order. Options, if used, must be placed last and must be preceded by a left parenthesis. The right parenthesis is optional.
Relational operators can be used with any of the elements of Access sentences to allow exact specification of the conditions to be met. Refer to the entry relational operators for more information.
file.reference
Usually the name of a file in the md to which the user is currently logged. It can also be a synonym file name. The file name can be preceded by the literal dict to access the dictionary of the file instead of the data portion of the file. The default is data. In some cases, data may be specified to indicate only the data portion of the file.
To reference a file in another account or md from TCL, pathnames are used. The pathname may be used in place of the file.name in any TCL or Access statement. A pathname may be entered in one of the following forms:
account.name,dict.name,file.name
account.name,file.name,
dict.name,file.name
item.list
Specifies one or more item-ids in the file defined by the associated file.reference. The item.list may be one or more explicit item-ids, a series of items separated by relational operators, an asterisk (*) to represent all the items in the file, or null.
If a select list is not active, a null item-id implies a new item for UP and all items for the other processors. Any command requiring a select list can obtain it from a previously selected list. (See the get-list, select, and sselect commands.) To cause a processor to use the select list, the item.list must be null. An item-id with the same name as a language element in either the md or the dictionary of the file must be enclosed in single quotes.
selection criteria
Limits the data by specifying criteria that must be met. Multiple criteria may be established to limit data selection to meeting a certain set of limitations. These sets are established by the logical relational connectives "and" or "or".
sort criteria
Connectives used to define the sort operation.
output specifications
Specifies the attributes to list. The selected attribute items or synonym labels are displayed in either a columnar or non-columnar format depending on the report width. The width of the report is the sum of the width of each attribute to be listed plus one blank separator between each attribute. If the width of the report does not exceed the page width as set by the term verb, a columnar format is generated.
The attributes for each item are displayed one under the other. If the requested output exceeds the page width, the column headings are listed in a non-columnar format down the side of the output with their respective values immediately to the right. In the non-columnar format, the column headings are listed only if there is a corresponding value. Item-ids are always displayed unless suppressed using the id-supp connective.
print limiters
Suppresses the listing of attributes within an item that do not meet specified limits.
modifiers
Control listing parameters such as double-spacing, control breaks, column totals, and suppression of item-ids, automatic headings, and default messages.
(options)
Used to tell the processor about special handling and tend to be processor specific. The options are single alpha characters and/or a numeric range specification as required by the specific processor. They are usually preceded by a left parenthesis. The right parenthesis is optional. When used, options must be the last element on the command line.

Wild Card Capability
Wild card characters may be used to select item-ids and attributes based on common characters. Wild cards can be used in selection criteria or complex item-lists as follows:
[ (left bracket) Matches characters following the bracket. Ignores characters to the left of the bracket.
] (right bracket) Matches characters from the beginning of the string to the bracket. Ignores the characters to the right of the bracket.
^ (caret) Matches any character in the position occupied by the caret.
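Read literally, the three wild cards behave like anchored substring patterns. The following Python sketch (illustrative only, not part of the Pick system) models them by translating a pattern to a regular expression:

```python
import re

def wc_match(pattern, s):
    """Model Pick-style wild-card matching (illustrative sketch).

    A leading '[' ignores whatever precedes the rest of the pattern;
    a trailing ']' ignores whatever follows it; '^' matches any one
    character in its position.
    """
    anchored_start = not pattern.startswith("[")
    anchored_end = not pattern.endswith("]")
    core = pattern if anchored_start else pattern[1:]
    if not anchored_end:
        core = core[:-1]
    # '^' becomes the regex single-character wild card; everything else
    # is matched literally.
    body = "".join("." if ch == "^" else re.escape(ch) for ch in core)
    rx = ("" if anchored_start else ".*") + body + ("" if anchored_end else ".*")
    return re.fullmatch(rx, s) is not None
```

For example, '[son' matches item-ids ending in "son", 'john]' matches those beginning with "john", and 'j^hn]' allows any character where the caret appears.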

Retrieval of Items from Files
The Access processor uses both the master dictionary and the file dictionary to determine the definition of the elements in the Access sentence. The file pointer to the file dictionary and the connectives used in the sentence, for example, are found in the master dictionary. The file pointer to the data file and file-specific attribute definitions are found in the file dictionary. If an element is defined in both the master dictionary and the file dictionary, the definition in the file dictionary is used.
If the element is not found in either the master dictionary or the file dictionary, Access creates a new element by concatenating the unknown element to a blank and the next element in the string. The processor attempts to look up this new element in first the file dictionary and then the master dictionary. If the new item-id is not found, an error message displays. The Access processor does not look up terms in the string that are enclosed in quotes, single quotes, or backslashes. These are assumed to be literals.
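The lookup order described above can be sketched as follows (an illustrative Python model; the two dictionaries are stand-ins for the real file and master dictionaries):

```python
def resolve(words, i, file_dict, master_dict):
    """Resolve the i-th word of an Access sentence (illustrative sketch).

    The file dictionary takes precedence over the master dictionary.
    An unknown word is retried concatenated with a blank and the next
    word before the lookup fails.
    """
    word = words[i]
    candidates = [word]
    if i + 1 < len(words):
        candidates.append(word + " " + words[i + 1])
    for candidate in candidates:
        if candidate in file_dict:       # file dictionary wins
            return file_dict[candidate]
        if candidate in master_dict:
            return master_dict[candidate]
    return None  # the real processor reports an error here
```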

Default Output Specifications
In addition to explicitly listing attribute names as part of the Access statement, there are three features that can be used to specify default output specifications. These specifications output the default attributes when attributes are not explicitly specified in the Access statement and are listed below:
- The attribute names can be listed as a macro in the file-defining item.
- Default attribute items can be created.
- Temporary attribute items can be created.

Default Attribute Items
Default attribute items have numeric item-ids starting with 1. These item-ids are used by Access verbs as output specifications if no other output specifications are given. The numeric item-ids must be consecutive; that is, in order to have the third attribute listed by default, attribute items 1 and 2 must exist, even if they are not needed for the listing. Attribute items that are not needed for listings can be given a d/code of x. For more information about default attribute items, refer to the entries default attribute items (AP), default attribute items (R83), and default output specifications.

Temporary Attribute Items
Attribute items using special attribute names can be specified in an Access sentence without actually existing in either the file dictionary or the master dictionary. The attribute name is of the form "Aac", where 'ac' is the attribute number. Temporary attribute items are created with a justification code (attribute 9) of lx (left justify and expand display field to fill report).
For example, even if neither the master dictionary nor the file dictionary for ent has an attribute-defining item a14, a statement such as "list ent a14" lists attribute 14 of the items in the ent file, where a14 is the temporary attribute name.

Spooler
The Pick System Spooler controls all output that is sent to the printer. The term "Spooler" comes from the acronym SPOOL derived from Simultaneous Peripheral Output On-Line. Depending on the printer assignments and the status of the printer, the output may be printed immediately, sent to tape, placed in a queue, or placed in a hold file.
Most Spooler commands allow options, which sometimes have numeric arguments. To keep these options clear for the options interpreter, it is recommended that numeric options be separated with blanks, as in the following example, which assigns hold file output to formqueue 1 with 3 copies:
:sp-assign 3 hsf1
Unlike options to most TCL commands, Spooler options do not have to be enclosed in parentheses. If numeric options are placed within parentheses, parameters outside of the parentheses are ignored. Within parentheses, do not separate options with blanks.
The Spooler directs the items in the queue to the printer as the printer becomes available. The Spooler formats items in a hold file as if they were being output to the printer, but does not actually output them. The Spooler can be directed to output hold file items to the printer, to tape, or to a specified file. The following TCL commands are available to control the Spooler activity:

:startspooler assignfq list-ptr
listabs listpeqs listptr
sp-assign sp-close sp-edit
sp-kill sp-open sp-status
sp-tapeout startptr startspooler
stopptr

For more detailed information about these commands, refer to the individual entries for each command.

TCL Commands
The TCL commands are listed below. For a more detailed description of each command, refer to their individual entries.


! :absload :bootstrap
:files :reset-async :scrub-ovf
:shutdown :startspooler :swd
:swe :swx :swz
:taskinit = ?
abs-dump abs.fid absdump
account-maint account-restore account-save
add add-font addbi
addd addenda addendum
addx after alarm
b/list basic basic-prot
beep bformat blist
block-print bootstrap break-key
brk-debug brk-level buf-map
buffers bulletin.board buffers.g
cal capt capture-off
capture-on case case-file
cat catalog cd
cf charge-to charges
check-account check-dx check-files
check-sum check-ws checkfiles
chg-device chksum choose.term
cl clear-basic-locks cleanpibs
clear-file clear-index clear-locks
clock cls cmdu
coldstart coldstart.log color
comment compare-list compare
compile compile-catalog config
conv-case converse copy
copy-list copydos count
cp create create-abs
create-account create-bfile create-file
create-index create-macro create-nqptrs
cross-index cs ct
currency date db
dcd debug decatalog
define-terminal define-up del-acc
delete delete-account delete-file
delete-index detach-floppy delete-list
detach-sct dev-att dev-det
df diag dir
dir.pick disc disk-usage
diskcomp diskcopy display
div divd divx
dl dm dos
dos.bridge dos.shell dos.video
download dtr dtx
dump ecc echo
ed edit edit-list
el end env
environ epson esc-data
esc-level esc-toggle exchange
exec exit export
f-resize fc fdisk
fid file-save filecomp
find fkeys fl
flush font-parms format
frame-fault free fuser
get-list get.pick gl
group halt-system hash-test
help hush import
import.pick indexer init-abs
init-ovf initovf inputwait
inter iselect isselect
istat item k
kill l ld
ldf legend lerrs
lf lfd lfs
li link-pibdev link-ws
list list-abs list-acc
list-commands list-conn list-device
list-dict list-errors list-file-stats
list-files list-item list-jobs
list-label list-lines list-lists
list-lock-queue list-locks list-logoffs
list-macros list-menus list-obj
list-pibs list-system-errors list-procs
list-ptr list-restore-error list-ports
list-users list-verbs listacc
listbi listc listconn
listdict listf listfiles
listfs listprocs listptr
listu listusers listverbs
ll lm load.mon
lock-frame log-msg log-status
logoff logon logto
loop loop-on lp
lq lre lu
maxusers md-restore message
mirror mlist mload
mmvideo modem-off modem-on
mono monitor-status move-file
msg mul muld
mulx mverify nframe-index
node nselect off
okidata op overflow
ovf p pack
password phantom-status pc
pibstat pick pick-setup
pid pitch-compile pitch-table
poke povf power-off
ppcp prime print-err
print-error printronix prio
prompt psh psr
pverify pxpcmd qselect
r83.setup reboot rebuild-ovf
recover-fd recover-item reformat
rename-file renumber reset-port
reset-user restore-accounts ri
rmbi rnf rtd
run run-list s-dump
save save-list search
search-file search-system sel-restore
select send-message set-8mm
set-abs set-batch set-batchdly
set-baud set-date-format set-cmem
set-date set-date-eur set-break
set-date-std set-device set-dptr
set-esc set-file set-floppy
set-flush set-func set-half
set-imap set-iomap set-kbrd
set-keys set-num-format set-lptr
set-ovf-local set-ovf-reserve set-port
set-runaway-limit set-sct set-sct-dma
set-sym set-shutdown-delay set-sound
set-tape-type set-term set-time
set.lptr set.time setpib0
setport setup-printer setup.rtc
setup.sio sh shell
shl shp-kill shp-status
shpstat shutdown si
sl sleep slice
sm sort sort-item
sort-label sort-list sort-users
sortc sortu speller
sreformat sselect stack
start.rtc start.ss startlog
startsched startshp stat
status-port steal-file stoplog
stopsched strip-source sub
subd subx sum
system-coldstart t-att t-bck
t-bsf t-bsr t-chk
t-det t-dump t-eod
t-erase t-fsf t-fsr
t-fwd t-load t-rdlbl
t-read t-ret t-reten
t-rew t-select t-space
t-stat t-status t-unld
t-unload t-verify t-weof
t-wtlbl ta tabs
tandem tape-socket tcl
tcl-hdr term term-type
termp test-cursor time
time-date timedate tlog-restore
to touch trap
txlog type type-ahead
unix unlink-pibdev unlock-frame
unlock-group unlock-item unpack
update update-abs-stamp
update-accounts update-logging update-md
update-prot user-coldstart useralarm
user-shutdown verify-index verify-abs
verify-system vga.lcd video.demo
what where which
which-line who wlist
wselect wsort wsselect
x-ref xcs xonoff
xref xtd z
zh zhs

Connectives
Connectives are words in the master dictionary that are used to form the elements of Access statements. They include relational operators and modifiers and are used to form sort and selection criteria and limit data to be processed by the verb with which they are used.
Relational operators are used to establish criteria based on the relationship of data to fixed values or other data. Relational operators would be used to select a range of zip code values within specified upper and lower limits. Refer to the entry relational operators for more information.
Other modifiers and options are listed below. For more information on each of these connectives, refer to their individual entries.

any before break-on
by by-dsnd by-exp
by-exp-dsnd col-hdr-supp dbl-spc
data entry det-supp dict
duplicate each every
fill footing grand-total
hdr-supp heading header
id-prompt if {each|every} id-supp
if{no} legend-supp lptr
ni-supp no nopage
only or roll-on
sampling spread-sheet ss
supp tape tcl-supp
total total-on using
with within without
Syntax
Options
Example
Purpose
Related

basic.fold

Command basic.fold Function/BASIC Program
Applicable release versions: AP
Category BASIC Program (486)
Description "folds" a string.expression into a string of a given length or lengths.

The "fold.length.expression" specifies the length(s) at which the string.expression will be folded. If the fold.length.expression is omitted, it defaults to 25. Multiple numeric expressions, separated by value marks, may be specified in this parameter.

The text is folded so that the length of the first line is less than or equal to the first numeric value in the fold.length.expression, the length of the second line is less than or equal to the second numeric value, and so on. If more lines result than there are fold.length.expressions, the last fold.length.expression is applied to the remaining lines. If possible, the text is folded on a space.

The "delimiter" parameter is the delimiter used in the folded text. This parameter is required by the compiler, but the parameter may be null, in which case a value mark ( char(253) ) is used. See example 2.
Syntax fold(string.expression,fold.length.expression,delimiter)
Options
Example
equ svm to char(252)
input string
string=fold(string,10,svm)
print string

When the string:

this is a test string to demonstrate "fold"

is entered, the string is embedded with the requested subvalue marks as follows:

this is a\test\string to\demonstrate\"fold"

a = fold(a,25,"")
-or-
delim = ""
a = fold(a,25,delim)

In both of these examples, a value mark is used as the fold delimiter.
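The folding rule can be approximated in Python as follows (an illustrative sketch, not the actual implementation; it assumes a word longer than the fold length is kept whole, and uses the value mark char(253) as the default delimiter):

```python
def fold(text, lengths=(25,), delim="\xfd"):
    """Approximate the Pick/BASIC fold() function (illustrative sketch).

    Each line is cut at the last space at or before the current fold
    length; successive lengths apply to successive lines, with the last
    length reused for the remainder of the text.
    """
    lines = []
    i = 0
    while text:
        width = lengths[min(i, len(lengths) - 1)]
        if len(text) <= width:
            lines.append(text)
            break
        cut = text.rfind(" ", 0, width + 1)
        if cut == -1:
            # No space within the fold length: assume the overlong
            # word is kept whole rather than broken mid-word.
            cut = text.find(" ", width)
            if cut == -1:
                lines.append(text)
                break
        lines.append(text[:cut])
        text = text[cut + 1:]
        i += 1
    return delim.join(lines)

# fold("this is a test string to demonstrate fold", (10,), "|")
# -> "this is a|test|string to|demonstrate|fold"
```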
Purpose
Related basic.statements
basic.occurs
basic.count
basic.functions

tcl.shp-status

Command tcl.shp-status Verb: Access/TCL
Applicable release versions: AP/Unix
Category TCL (746)
Description displays status information about printers shared with Unix, started by the TCL command "startshp".

The following information is displayed:

'VMname'
Name of the Pick virtual machine which uses the shared printer.

'Prt'
Pick printer number, as specified in the startshp command.

'Port'
Pick port number the printer process was started on.

'PID'
PID of the 'lppick' process associated with the Pick printer process. This process acts as a filter between the Pick printer output, which is a continuous data stream separated by 'end of job' sequences, and the Unix spooler, which accepts separate jobs.

'Spooler command'
Unix command used to spool data. Only the first 32 characters of the command are displayed.

'Status'
This field reports the activity of the Pick printer process and the existence of the lppick filter. The possible values are:

'OK'
Both processes are alive.

'prt OK'
The Pick printer is alive and seems in a normal state.

'prt ERR'
The Pick printer is alive but appears in a wrong state.

'prt DEAD'
The Pick printer process has been killed.

'lp OK'
The lppick process is alive.

'lp DEAD'
The lppick process has been killed.

For normal operations, the status should be 'OK'. However, if the filter process gets killed, the output of the Pick printer process cannot be sent to Unix. Issuing the 'startshp' command again should clear the situation and restart the necessary processes.

If the (T) option is used, the trace information recorded is displayed, starting with the newest trace entry. The following information is displayed:

tr# Date Time Description

where:

tr# : Trace number in decimal.
Date : Date.
Time : Time (Note: The Pick time is displayed)

Description:
"Start job"
Beginning of a job

"End job, size=N (-1,10)"
End of a job. 'N' is the size in decimal. The values between parentheses are internal return codes.

"Write data, size=N"
Write 'N' bytes of data to the 'lp' command.

"Read error, errno=X"
Encountered a read error on stdin. 'X' is the decimal value of 'errno'.

"Write error, errno=X, expsz=N, sz=M"
Encountered a write error on stdout (to the lp command). 'X' is the decimal value of errno, 'N' is the number of bytes we attempted to write, and 'M' the actual number of bytes written.

"Raw read, size=N,'<text>'"
Raw data read from stdin (coming from the Pick printer). Non-printable characters are replaced by a '.'. 'N' is the total size. This trace is available only when trace level 3 is used.

"Raw write, size=N,'<text>'"
Raw data written on stdout (to the lp command). Non-printable characters are replaced by a '.'. 'N' is the total size. This trace is available only when trace level 3 is used.
Syntax shp-status {(options}
Options t{n} Displays trace information. If 'n' is not specified, all traces are shown, starting with the most recent. If 'n' is specified, only that number of traces is shown for each printer.
Example
shp-status

VMname Prt Port  PID Spooler command      Status
------ --- ---- ---- -------------------- ---------------
pick0    0  126 1763 'exec lp -onobanner' OK
dev      0   32 1765 'exec cat >> xx'     prt OK  lp DEAD
prod     0   44  345 'exec lp -s'         prt ERR lp OK

The first line indicates that the first shared printer is the printer 0 of the 
virtual machine 'pick0', started on the port 126, sending its data to 
the normal 'lp' command, suppressing the banner. It appears to be ok.

The second line indicates that the second shared printer is the printer 0 of 
the virtual machine 'dev', started on the port 32, sending its output 
to the Unix file 'xx' (in whatever directory the command 
'startshp' was issued from). The Pick printer process is still up, but 
the 'lppick' filter is dead, possibly because the file 'xx' 
was write protected, or perhaps because the Unix file system is full.

The third line indicates that the third shared printer is the printer 0 of the 
virtual machine 'prod', started on the port 44, sending its data to 
the normal 'lp' command, suppressing lp messages. The Pick printer 
process is still up, but in an incorrect state, probably because a 
'sp-kill' command was issued, which sent the process back to logon 
without terminating the Unix process. The 'lppick' process appears 
OK. To terminate a shared printer properly, use the TCL command 
'shp-kill'.


shp-status (t

VMname Prt Port  PID Spooler command       Status
------ --- ---- ---- --------------------- ---------------
pick0    0  126 1763 'exec lp -onobanner'  OK
                     tr# Date. Time.... Description
                     3  03/03 13:14:00 End job, size=1223
                     2  03/03 13:13:55 Start job
                     1  03/03 13:14:00 Start lppick

Display trace information.
Purpose
Related tcl.startshp
unix.lppick
tcl.shp-kill
tcl.listptr

connectivity.to.unix.in.ap

Command connectivity.to.unix.in.ap Article/Article
Applicable release versions: AP
Category Article (24)
Description integrating Advanced Pick and Unix.

Now that Pick Systems has begun implementing Advanced Pick on various implementations of the Unix Operating System, the most asked question is how to develop applications that can interact between the two environments. Advanced Pick includes several methods to take advantage of the Unix environment including file transfer utilities, several 'C' functions from FlashBASIC or Pick/BASIC and use of all Unix verbs from TCL. This article will focus on the abilities of the file transfer utilities and their use in application development. Future articles will concentrate on the other aspects of the Unix environment.

Advanced Pick includes two utilities called IMPORT and EXPORT to move files back and forth between Unix and Pick. To convert a Unix file into the Pick database, use the IMPORT verb to convert the file from Unix file format to Pick. The following example will convert the INITTAB file from Unix into Pick file format and store the item in the master dictionary:

IMPORT MD INITTAB
From: /etc/inittab

The EXPORT facility works in much the same way. The following example will convert the item '12345' from the customer file into Unix file format and store the item in the Unix file /usr/lib/pick/test.item.

EXPORT CUST 12345
To: /usr/lib/pick/test.item

Using these two Pick commands, we can create an application in Unix to transfer files back and forth between the two environments as necessary. The process can be run by a user from the command prompt or as a scheduled job through CRON. To do this, we will need to create a script that can start a Pick process, logon to a particular user and account, execute a macro, proc or basic program and then clean up the process and return to Unix.
Most Advanced Pick/Unix systems are configured as a turn-key system. This means that the inittab file has been configured so that each port will display a Pick logon banner and that the users will never have to deal with Unix.

There are two ways to logon to the Pick machine from a port that is configured for Unix. Simply entering 'ap' at the command prompt will connect the user to the next available Pick process, assuming that the Pick machine has been started. For our example, this method is not practical as you never know which Pick process you will become. Another way to connect to the Pick virtual machine is to specify the port or process number in the original command line. For example, we could always set aside Pick process 20 as the process used for this script. We would start this process by entering 'ap - 20' at the command prompt. This command to start a Pick process can be expanded to include user and account names to logon, commands to execute at TCL, etc. This is accomplished by stacking data in the command to be executed once the process is activated. Continuing with our example, the following syntax will start a Pick process on line 20, logon as the user DM to the account BA, set the terminal type to an IBM3151 and execute the command 'menu' at TCL:

ap - 20 -d '\r\xdm\rba\rterm ibm3151\rmenu\r'

The '-d' specifies that the following string is to be stacked as data after the process begins. The entire string must be enclosed in quotes and the meanings of the backslash sequences are as follows:

\r Insert a carriage return.
\f Turn echo off. The stacked data will not be displayed.
\n Turn echo on. The stacked data will be displayed.
\\ Insert a back slash.
\x Exit read. It is used to control the system when stacking commands which empty the input buffer by executing a cancel-type-ahead command. Whenever such a command is issued, all data up to the \x is deleted. When activating a process for the first time, the system reads one character, then empties the type-ahead buffer. Therefore, stacked data should always start with the sequence \r\x, followed by the logon sequence.

The passed data string can include any valid TCL commands including procs, macros and FlashBASIC or Pick/BASIC programs and the passed string is unlimited in size. It is possible for the entire data string to be contained in a Unix file by using the shell command substitution mechanism. For example, the above commands could be placed in the Unix file /usr/lib/pick/logon and used in the following manner:

ap - 20 -d "`cat /usr/lib/pick/logon`"

This function could be used to log onto the Pick machine, transfer a file to Unix file format and then execute any Unix command against that file. An example would be if a company had multiple individual stores and wished to send sales information to the computer at the main office. Each evening, the computer at the individual stores could log on a process on the Pick machine, convert the daily sales information into a Unix file and then use any Unix facility to upload the information to the computer at the main office. The facility used to send the information could range from something as simple as modem communications to something as complicated as a network.

We could automate this function through the use of the Unix feature CRON, which can schedule jobs to be run at any time ranging from every minute to once a year. The following example would log onto the Pick machine, run a FlashBASIC or Pick/BASIC program called BUILD.SALES and export the resulting file to Unix:

ap -d '\r\xdm\rsales\rbuild.sales\rexport md sales\r/temp/sales\rexit\r'

To automate this process, please consult your Unix manual on the syntax and usage of the CRON function.
Syntax
Options
Example
Purpose
Related

tcl.brk-debug

Command tcl.brk-debug Verb: Access/TCL
Applicable release versions: AP
Category TCL (746)
Description indicates that the break key will invoke the debugger on subsequent uses.

If the current process is a Pick/BASIC program, the Pick/BASIC debugger is invoked. In all other cases, the system (virtual) debugger is invoked. If the <break> key is set to push a level, the debugger may be entered with the "debug" or "de" command.

On some systems, when the <break> key is set to push a level, it is not possible to push a level while in the debugger. To push a level while in the debugger, enter a colon (:) followed by <return> or <enter>.
Syntax
Options
Example
Purpose
Related tcl.level.pushing
tcl.esc-level
levels
tcl.break-key-on
tcl.break-key
tcl.break-key-off
tcl.r83.setup
tcl.brk-level
tcl.esc-data
tcl.debug
basic.debug
system.debugger.end
system.debugger.:
ue.218d

tcl.reblock-ovf

Command tcl.reblock-ovf Verb: Access/TCL
Applicable release versions: AP 6.1
Category TCL (746)
Description forces contiguous overflow blocks which exist in different internal tables to be compared and combined if possible.

In releases 6.1.0 and above, the overflow is split into a "safe" table which contains the largest available blocks, and the normal "b-tree" table which contains smaller blocks. After a crash, the "safe" table is used to recover the overflow to a known state. Because of this separation, it is sometimes possible to have blocks which appear to be numerically contiguous, but which are not combined because they exist in different tables. The "reblock-ovf" verb will correct this situation.
Syntax
Options
Example
Purpose
Related

tcl.basic-prot

Command tcl.basic-prot Verb: Access/TCL
Applicable release versions: AP 5.2.5, AP 6.0
Category TCL (746)
Description toggles or displays the status of the Pick/BASIC object protection scheme.
When enabled, this feature is global, thus it affects the entire system.

The Pick System shares Pick/BASIC object code between all processes running a given program. While this vastly decreases memory requirements, it also opens the possibility of one user compiling a program while another user is concurrently running that same routine. This circumstance tends to produce random, unexplainable aborts that can be difficult to track on large systems. The protection scheme involves insulating running object code from updates caused by recompilation.

When protection is enabled, all previous revisions of Pick/BASIC object code are kept in the same dictionary group, but are simply marked as "deleted". These "deleted" items are automatically cleared during the "save" process. (See the discussion of "dirty bits" in the topic on the "save" verb.) This allows compiling programs while they are currently being executed. Users running a given program when that program is compiled will continue to run the old version. If a user drops out of the program, to TCL, for example, and re-executes the program, the system will execute the newest object version.
Syntax basic-prot {(option}
Options f Turns object protection off.

n Turns object protection on.
Example
Purpose
Related tcl.basic-prot-on
tcl.basic-prot-off
tcl.user-coldstart
tcl.save

tcl.buffers.g

Command tcl.buffers.g Verb: Access/TCL
Applicable release versions: AP
Category TCL (746)
Description produces a graphic histogram of buffer usage for a range of dates and times.

"counter" is the attribute name in the "dm,buffers.log," file to examine. The available attribute names are:

0 time Times.
1 activ Activations.
2 idle Idle time.
3 fflt Frame faults.
4 writes Disk writes.
5 bfail Buffer search fails.
6 fqfull Read queue fulls.
7 wqfull Write queue fulls.
8 dskerr Disk errors.
9 elapsd Elapsed time.

21 ww Write requireds.
22 iobusy I/O busy.
23 mlock Memory locked.
24 ref Referenced.
25 wq Enqueued writes (write queues).
26 tophsh Top-of-hash.
27 avail Number of available buffers.
28 batch Batch.

Additional attributes available on a hosted Unix system are:

10 dblsrc Double-source.
11 breuse Buffers re-used.
12 bsleep Buffers sleeping.
13 sem Semaphores.

"start.day" is the beginning day-of-the-week for the graph results. The available values are: sunday - saturday, or 0 - 6.

"end.day" is the ending day-of-the-week for the graph results. The available values are: sunday - saturday, or 0 - 6.

"*" Spans the entire week.

"start.time" is the beginning time-of-day for the graph results. The valid values are: 00:00:00 - 23:59:59.

"end.time" is the ending time-of-day for the graph results. The valid values are: 00:00:00 - 23:59:59.

The reports are histogram averages of the buffer values sampled over a period of time (from the "buffers" command). These reports can give the System Administrator a better idea of the workload of the Pick system, and identify possible bottlenecks in the system's performance.
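As an illustration of how such a histogram line could be built from the sampled counters, here is a hypothetical sketch; the function name and the scaling rule are assumptions, not the actual buffers.g implementation:

```python
# Hedged sketch: build one histogram bar per sample, scaled so the
# largest sampled value fills the full bar width.
def histogram(samples, width=59):
    """samples: list of (time-string, counter-value) pairs."""
    peak = max(value for _, value in samples) or 1  # avoid divide-by-zero
    lines = []
    for stamp, value in samples:
        stars = "*" * round(value * width / peak)
        lines.append("%s %s" % (stamp, stars))
    return lines
```

For example, a sample list in which the value at 16:52:45 is the peak produces a full-width bar for that time and proportionally shorter bars elsewhere.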

The activity log is stored in the file "buffers.log" with a data level per weekday (buffers.log,Monday, buffers.log,Tuesday, etc.). The file is created automatically the first time the buffers (H) command is used. Each data level is cleared when the day changes, so that the file automatically records a whole week of activity. The item-id is the internal time, in five digits.

The buffers command also automatically creates the dictionary attributes corresponding to the various counters, as shown in the table above. The attribute TIME displays the sampling time.

The attribute DESCRIPTION in the D-pointers Monday, Tuesday, etc., contains the date.

The file is created with a DX attribute.
Syntax buffers.g counter {start.day{-end.day} {step {start.time-{end.time}}} {(option}

buffers.g counter {* {step {start.time- {end.time}}} {(option}
Options g Displays a graph rather than a histogram. With this option, the 'step' is calculated automatically and the results are averaged to smooth the curve.

p Direct output to printer.
Example
buffers.g sem  6

0      1      2      3      4      5      6...
+------+------+------+------+------+------+---
16:52:05
16:52:11
16:52:18 **************
16:52:25
16:52:31
16:52:38 *******
16:52:45 *********************
16:52:51
16:52:58
16:53:05 *******
16:53:12 *******
16:53:18

Number of samples   : 13
Total               : 14
Average per period  : 0.0002 / sec.
Max value           : 4
Max value /s        : 0.2857
Peak time           : 16:52:45


buffers.g writes tuesday (g
49.0 *                                             
46.3 |                                           * 
43.5 |                  *                      **  
40.8 |      ***       *  *    ***                  
38.1 |     *   *    ** *    **         **          
35.4 |    *     *         **     *  ***  *   **    
32.7 |*  *       ***              **        *      
29.9 |                                    **       
27.2 | **                                          
24.5 |                                             
21.8 |                                             
19.0 |                                             
16.3 |                                             
13.6 |                                             
10.9 |                                             
 8.2 |                                             
 5.4 |                                             
 2.7 |                                             
 0/s +------+------+------+------+------+------+------
   09:09:26      09:11:46      09:14:06      09:16:26 

buffers.g fflt * 01:00:00 

List the number of frame faults (disk reads) for the whole week, in steps of
one hour. In the example below, no history was recorded before Wednesday.

No log for Sunday

No log for Monday

No log for Tuesday

20Feb1991; Wednesday; Ctr=fflt, Step=01:00:00, Range=00:00:00-23:59:59

         0      8848   17696  26544  35392  44240  53088  61936  
         +------+------+------+------+------+------+------+------+----
10:59:28 *************************
11:59:54 ***********************************************************
13:00:25 **********************************************************
14:00:52 ************************************
15:01:18 ***************************
16:01:49 ********************************************************
17:02:22 ***************************************
18:02:55 ******
19:03:32 ***********************************************
20:04:08 *************************************************
21:04:43 
22:05:21 ***************************************************
23:05:55 *************

Number of samples   : 155
Total               : 622070
Average per period  : 7.1999 / sec.
Max value           : 88481
Peak time           : 13:00:25

buffers.g ww monday-friday 00:30 08:00-17:30 (p

List the percentage of write-required buffers, for week days only, during
business hours, in steps of 30 minutes.
 

Interpreting Results

After taking a significant sample, list the results with the buffers.g
command. The most useful parameters to survey are:


Fflt    This measures the number of frame faults. If this number approaches
the disk bandwidth as determined by the manufacturer, the system becomes disk
bound. Solutions range from increasing the memory allocated to Pick, to
changing disks, or reorganizing the Pick database across separate disks to
increase parallelism.

Writes  This number should stay at about one third to one half of the number
of frame faults. It is not normal for a system to do more writes than reads.
If this is not the case, see the section 'Flusher Adjustment' in this article.

Bfail   This number should always be zero. If it is not, the memory allocated
to Pick is definitely too small.

WqFull  This number should be non-zero only rarely. If it is non-zero too
often, and the number of writes is also high, there is an abnormal rate of
writes. See the section 'Flusher Adjustment' in this article.

Bcolls  If this number becomes too high, it indicates that many batch jobs
(such as selects of big files) are being run while other processes are doing
data entry. It is also an indicator that interactive jobs are indeed receiving
higher priority than batch processes. See the section 'Interactive - Batch
Processes' below.

ww      This number should never go above 50% of the whole buffer pool. If it
does, the flusher is probably not activated often enough. See the section
'Flusher Adjustment' below.

avail   This number should never go below 10% of the whole buffer pool. If it
does, memory must be increased or the flusher must be adjusted.
Purpose
Related tcl.buf-map
tcl.monitor-status
tcl.flush
tcl.set-flush
flusher
tcl.buffers

tcl.logto

Command tcl.logto Verb: Access/TCL
Applicable release versions: AP, AP 6.2, R83
Category TCL (746)
Description terminates accounting on the current account, then moves to another specified account. If a password is present, it must be provided. Passwords are case-sensitive.

If the password is omitted and required, the system prompts for it.

It is possible to "logto" another account while at a "pushed" level. When a <return> is issued at the TCL prompt, the process automatically "logs back to" the original account unless the (F option was specified for the "logto" verb. In that case, every "pushed" level below the current one also logs to the new account. Those pushed levels are still running the programs they were originally running; they are simply in the new account.

Any tape or magnetic media devices attached to the process when logging to another account at another level remain attached in the new account.
Syntax logto account.name{,password}
to account.name{,password}
Options F The F option forces all pushed levels to also log to the new account (6.2 and above)
Example
logto dm

< Connect time= 217 Mins.; CPU= 46 Units; LPTR pages= 0     >

logto dm,mypassword
< Connect time= 217 Mins.; CPU= 46 Units; LPTR pages= 0     >

Legend:

"connect time" is the number of minutes the current account was in 
use.

"cpu" is the number of cpu units used by the account. CPU units vary
from system to system, but are generally recorded in tenths of a second.
Note: the cpu units shown are about 100 times smaller than the cpu units shown
by the Pick/BASIC 'system(9)' function.

"lptr pages" is the number of pages sent to the Spooler.
Purpose
Related tcl.charge-to
tcl.charges
tcl.logon
tcl.off
tcl.to
tcl.create-account_rp

general.header.q.ptr

Command general.header.q.ptr Definition/General
Applicable release versions: AP 6.2
Category General (155)
Description provides access to item header information using normal Pick utilities.
Through the 6.2 OSFI, it is possible to access information about an item such as update stamps, permissions, ownership, and driver-specific data. The header driver translates this information into a format which looks like a standard Pick item.

Note that access to the header information via this driver is read-only. The only way to modify the header information is through the normal update routines. For Pick files, this facility is enabled via the "Y" correlative on the D-pointer. Header information updates on non-Pick items depend on the behavior of the remote file system to which those items belong.

Any utility which physically moves the data (like "copy") changes the header information.

The "save" and restore utilities save and restore the header information as well.

Raw Attribute Definitions :

When reading items via the header driver, the items are returned as a dynamic array with the following raw attribute definitions:


# Description

1 User ID - The Pick user name or the Unix user number in hex of the last user to update this item.

2 Pib - The Pick PIB (in hexadecimal) of the last user to update this item. This field is undefined for non-Pick drivers.

3 Time/date - A hexadecimal representation of the number of seconds elapsed between 12:00 AM December 31, 1967 and the time the item was last updated.

4 Permissions - A hexadecimal number representing the permissions on the item. This currently only applies to non-Pick items.

5 GroupID - This is the Group ID (in hexadecimal). This is currently used only by the Unix driver, but may be used by other drivers in the future.

Other attributes are driver-specific.
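Assuming attribute 3 is read as described above (hexadecimal seconds counted from 12:00 AM December 31, 1967), converting it to a calendar date and time can be sketched as follows; the function name is hypothetical:

```python
from datetime import datetime, timedelta

# Illustrative assumption: attribute 3 holds hex seconds counted from
# the Pick epoch, 12:00 AM December 31, 1967.
PICK_EPOCH = datetime(1967, 12, 31)

def header_time_date(hex_stamp):
    """Convert the raw time/date attribute to a calendar date and time."""
    return PICK_EPOCH + timedelta(seconds=int(hex_stamp, 16))
```

For example, a stamp of "0" decodes to the epoch itself, and a stamp of hex 15180 (86400 seconds, one day) decodes to January 1, 1968.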

Q-Pointer Format :

The format of the header Q-pointer is:
file.name
001 Q
002
003 hdr:filename

'hdr' is the name of the 'hdr' host in the 'dm,hosts,' file.

'filename' is the name of the target file to examine. This may be a local Pick file (assuming the Y correlative has been added to the D-pointer), or a remote file (Unix or Dos).

The file may also be opened by prepending the filename with the string "hdr:".
Syntax
Options
Example
Purpose
Related general.super.q.ptr
general.remote
filename.hosts
qs-pointer
tcl.save
tcl.copy

runoff.intro

Command runoff.intro Introductory/Runoff: Commands
Applicable release versions: AP, R83
Category Runoff: Commands (93)
Description facilitates the preparation and maintenance of textual material such as memos, manuals, etc.

The "runoff" command invokes the output function of the "Runoff Processor". Text stored with embedded commands is formatted for output to a terminal or printer. See "commands, runoff".

Runoff source text contains commands which control justification, page headings and footings, numbering, spacing and capitalization.

Textual material prepared with Runoff may be easily edited and corrected with the "line editor" or "Update processor" and then reprinted with Runoff.

Runoff also provides the capability of combining separate textual material into a single report and inserting duplicate text into different reports.

Multiple input items are treated as a single source text file.

A source text item may contain a command which causes Runoff to "chain" to another file item. This makes it possible to "link" file items together without doing a "select" or "sselect".

Items included in "itemlist" may chain to other items within the same file. When the "chain" ends, processing continues with the next item from the "itemlist".

A source text item may also contain a command which causes Runoff to "read" a second file item and then resume processing of the first item. This makes it possible to insert the text from a single file item in the output from many other file items.

Runoff commands are stored along with the textual material in the source file, and each Runoff command must be preceded by a period.
Syntax runoff file.reference itemlist* {(options)}
Options * see "options: Runoff".
Example
Purpose
Related tcl.itemlist*
runoff.options
runoff.commands
tcl.runoff

basic.call

Command basic.call Statement/BASIC Program
Applicable release versions: AP, AP 6.1, R83
Category BASIC Program (486)
Description transfers control to an external Pick/BASIC subroutine and optionally passes a list of arguments to it.

Arguments must be separated by commas (,). When passing arguments, the same number of arguments must occur in the "call" statement as are declared in the "subroutine" statement, which must be the first executable line of the external subroutine. There is a maximum of about 200 arguments. The subroutine can return values to the calling program in variable arguments.

The external subroutine must ultimately execute a "return" statement to continue program execution at the statement immediately following the "call" statement. Subroutines which do not return will simply terminate execution.

The "call @", or "indirect call" form allows the statement to use the subroutine name assigned to a specific variable.

Called subroutines must be compiled and cataloged separately from the calling program. The arguments passed between (to and from) the program and subroutine are not label-sensitive, but are order-sensitive. The arguments listed in the "subroutine" statement and the arguments listed in the "call" statement may be different. The subroutine receives the arguments in the order in which they are specified in the argument list.

Variable arguments are passed to the subroutine at "call" time and from the subroutine at "return" time.

Arrays may be passed between programs and subroutines. The array in the program and called subroutine must contain the same number of elements. If dimensioned arrays are used, the arrays should be dimensioned exactly the same in both the program and subroutine (see the "dim" statement). Alternately, a "dim" statement may be specified without the actual number of elements. The array is properly initialized at run-time.

Arguments listed in both the "call" and "subroutine" statements should not be duplicated in the argument lists. Arguments that are also defined as "common" variables in both calling programs and subroutine programs should not be used in argument lists since the data is already accessible through the common allocations. Violation of these rules can result in unpredictable values being passed between the programs.

On releases 6.1.0 and above, it is possible to specify the file path name followed by a space followed by the actual subroutine name. To do this, the file path and program name can be passed via a variable to an indirect call, or the string can be enclosed in quotes and embedded directly into the program text. Specifying a direct file reference and program name eliminates the need for cataloging subroutines when an application is used from other accounts.

On releases 6.1.0 and above, if the subroutine is not cataloged, Pick/BASIC will attempt to locate the subroutine in the current program's dictionary. This obviates the need to catalog most subroutines.
Syntax call cataloged.program.name{(argument{,argument...})}
call @program.name.variable{(argument{,argument...})}
call "file.reference program.name"{(argument{,argument...})}
Options
Example
direct call: 

call process.lines(id,order.item(1)) 
-or-
call "process.lines"(id,order.item(1)) 

With or without quotes, this example still calls the "process.lines" 
subroutine. 

indirect call: 

program.variable = "process.lines"
call @program.variable(id,order.item(1)) 

This example calls the subroutine name held as a string in the variable 
"program.variable".

indirect call with full path name

program.variable = "dm,bp, process.lines"
call @program.variable(id,order.item(1))
Purpose
Related basic.statements
basic.subroutine
basic.dim
basic.precision
basic.tcl
basic.enter
basic.chain
tcl.date.iconv
basic.return
ue.31a2
basic.common
flash.basic.differences

tcl.tape-socket

Command tcl.tape-socket Verb: Access/TCL
Applicable release versions:
Category TCL (746)
Description defines a tape system across a network. This section is a detailed reference for the "tape-socket" TCL command. See the section "tape socket, General" for an introduction to the fundamental notions necessary to set up and use this system.
The "tape-socket" TCL command is used to create the input or output server and control their activity. A tape-socket log file "ts.log" is created the first time the command is started to record the process activity, and a permanent log "ts.log,log" keeps all messages.

This command can be executed only on the 'dm' or 'sysprog' account. It requires a SYS2 privilege.


Commands :

"cmd" is one of the following, where allowed abbreviations are shown in parentheses:

check (ch) Check remote Server.
drain (dr) Drain (empty) pipe.
query (q) Query a server.
setarg () Change argument.
setup () Setup Server parameters.
show () Show Server parameters.
shutdown () Stop BOTH Servers.
start (ss) Start a server by name.
startsend (ss) Start the output server.
startreceive (sr) Start the input server.
status (sta) List the server status.
stop (sto) Stop a server.
stopall (stopa) Stop all servers.
traceon (tron) Turn traces on.
traceoff (troff) Turn traces off.

Without any argument, a menu is displayed, showing the most useful options to manage the default server. The menu is described later in this section.


Arguments :

The arguments are specified by a series of statements "keyword=value". Arguments can be specified in any order. If "value" is a question mark (?), the user is asked to enter a value for the specified keyword, at which point a question mark (?) will display some help and 'q' will quit. If "value" is a dot (.), the value used last time is substituted. These forms are used in macros.

"keyword" is one of the following:

callrep Number of attempts the output server will make to call the input server. After this number has been reached, if the input server does not respond, the output server terminates. '0' means an infinite number of attempts. The default value is 0. A 5-second delay occurs between attempts.

cmd Command to be executed on the remote system. See the list of valid commands below in the section "Check Command". This form is used to check the communication link. See the example section. Valid only on version 1.4.1 and later.

host Network name of the host where the input server resides. This argument is required only for the output server to be able to reach the input server. The host name must be defined in the '/etc/hosts' file on the sending system.

ndisc Maximum number of network disconnection(s) either server will tolerate. A value of 0 means infinite number. When a network disconnection occurs, the output server will try to call the input server again. The default value is 0.

notify Name of one or more Pick user(s) to whom error messages are sent. The notify list may have one of the following forms:
OFF : Disable the notification.
user : Pick user name.
! line : Pick line number
* : All Pick users.
exec cmd : Run 'cmd'.

More than one user can be specified by separating each user with a comma (eg notify=bob,sam). If the 'exec cmd' form is used, the entire notification list must be enclosed in quotes, because of the space which follows the 'exec' keyword. For example, notify="dm,!0,exec run bp send-mail". The text of the error message is appended after 'cmd'. The users must exist in the "dm,users," file. Valid only on version 1.5.0 and later.

pib Pick port number of the server. This option can be used as an argument to the "query" and "stop" commands as a quick alternative to the form "pipe=device".

pipe Define the Unix pipe. A full path name must be provided (eg /dev/tapein). The pipe MUST exist and have appropriate read/write permissions. A raw device name will be accepted as well.

poll Defines the period with which the transaction logger is tested. "poll" is expressed in seconds, or in HH:MM:SS. A value of 0 disables the transaction log test polling. See the section "tape socket, General" for a detailed description of this feature. Valid only on version 1.5.0 and later.

port Socket port number in decimal of the input server on the receiver's side. The socket port number is a convention between the input and output servers. It is a decimal number between 1024 and 32767.

prot Network protocol. The following protocols are supported:
"inet" : Internet.

server Name of the server. If left empty, the default server is used. The Server name can be any string. Valid only on version 1.5.0 and later.

servertype Type of the server. This option is required when using the "setup" command to set up the running parameters of a Server. "servertype" is either 'IN' for the Input Server, or 'OUT' for the Output Server. Valid only on version 1.5.0 and later.

trace Maximum number of traces either server will keep in the log file. The default value is 4. Without the (V) (verbose) option, only major events are recorded. The (V) option records ALL data. When this number is exceeded, the oldest trace entries are discarded.

txlog Specify whether the Server is to be linked automatically to the Transaction Logger. "txlog" is either 'ON', or 'OFF'. See the section "tape socket, General" for a detailed description of this feature. Valid only on version 1.5.0 and later.

txopt Specify whether the Transaction Logger should log updates to all files, or only to files with the (DL) attribute. This option is valid only for the Output Server, if 'txlog=ON'. 'txopt' is either 'DL' or 'ALL'. Valid only on version 1.5.0 and later.

txpriod Specify the period, in seconds, with which the transaction log queue is emptied. A value of 0 specifies the default value. Valid only on version 1.5.0 and later. On AP versions prior to 6.1, this time cannot be changed and must be specified as 0.


Status Command :

The "status" command lists the following information:
id Port number of the server, in decimal.

T Type of the server:
s : Send (output) server
r : Receive (input) server

Pipe Pipe name

Host Host name (input server only).

Port Socket port number in decimal

S Status of the server;
E : Error
C : Completed
L : Logoff
Q : Queued
R : Running
S : Aborted

Time Time of the last trace entry

Date Date of the last trace entry

Message Trace message. Each trace is prefixed by the current message number in decimal. The servers exchange message number information to make sure no data loss occurs. The following are the main messages:

ACK timeout The input server did not respond to a message. The output server retries.

Accept err=n accept() system call error. n=errno.

Bad header 'X' A network message had an incorrectly formatted header. 'X' is a hex dump of the header.

Bad msg num 'X' A network header contained an incorrect message number 'X'

Bind err=n Input server could not 'bind' with the specified port. n=errno.

Broken pipe Input server tried to write into pipe, but associated tape process detached from it.

Call err=n Output server can not call input server. n=errno.

Cannot find jobid Server could not find its job id in the phantom log files 'jobs'.

Check ERR The input Server responded to a 'check' command but found an error.

Check OK The Input Server responded to a 'check' command.

Clear pipe Clear the pipe, if NO (C) option.

Connect accept The input server accepted an incoming connection.

Connect err=n Output server failed to establish connection. n=errno.

Hread trunc=n Network msg header truncated. n=msg length.

Listen err=n listen() system call error. n=errno.

Local query Server responded to a 'tape-status query' command.

Lost msg=n Input server detected a message loss. n is the message received. Messages from the current message up to n are lost.

Lost POLL n A Transaction Log test item has not been received by the Input Server.

Malloc err=n malloc() system call error. n=errno. Server could not obtain memory for buffers.

Nread err=n Network read error. n=errno.

Nread n xxxxx Network read. n=msg length, 'xxxxx' trace.

Nwrit err=n Network write error. n=errno.

Nwrit n xxxxx Network write. n=msg length, 'xxxxx' trace.

Nwrit trunc=n Network write truncated. n=msg length.

Open pipe Wait for pipe open.

Pclear err=n Error while attempting to purge the pipe. n=errno.

PEOF err=n Input server failed to write an EOF marker in the pipe.

Popen err=n open() pipe error. n=errno.

Pread err=n Pipe read error. n=errno.

Pread n xxxxx Pipe read. n=msg length, 'xxxxx' trace.

Pread trunc=n Pipe read truncated. n=msg length.

Pwrit err=n Pipe write error. n=errno.

Pwrit n xxxxx Pipe write. n=msg length, 'xxxxx' trace.

Pwrit trunc=n Pipe write truncated. n=msg length.

Re-sync n Input server receives a SYNC message from output server. n=new starting msg number.

Send resync Output server was asked to send a sync message.

Remote shtdwn The Input Server receives a request to stop.

Running Server is running. This status is stored every 5 minutes on a busy system. This message is not stored in the permanent log.

Sent POLL A Transaction log test item has been sent to the Input Server.

Seq error n A message was received twice. n=old message number. The msg is discarded.

Started Server is started.

Stop on req Server stopped due to a 'tape-socket stop' command.

Stop refused-Txlog up A request to stop the input server was refused because the transaction logger is still active. Repeat the stop request.

Stopped Server is stopped due to a spontaneous termination. The cause of the termination is indicated in a previous trace entry.

Socket err=n socket() system call error. n=errno.

TLOG not off The Input Server failed to abort the transaction restore process following a request to stop.

TLOG Restarted The Input Server restarted the transaction restore process following a request from the remote Output Server.

TLOG Terminated The Input Server aborted the transaction restore process following a request to stop.

Too many disc Server detected disconnects in excess of 'ndisc' and terminated.

Too many errors Server detected too many errors.

Total on dd/mm Total number of kilobytes transferred since the first time a message was logged the morning of the specified day.

Txlog OK Transaction log test item has been received by Input Server. This message is not stored in the permanent log.

Unexpected msg 'X' The message 'X' on the network is not a 'tape-socket' msg.

Unknown cmd 'X' Server received an unknown command 'X'

Wait ack Output server is waiting for an ACK.

Wait connect Input server waits for incoming call.


Query Command Result :

The "query" command returns the running parameters, and the following information:

Next poll time Time, if activated, of the next Transaction Log test polling.

Total Data transferred Total number of kilobytes transferred that day. This number is approximate.

Last msgnum Last message number at the time of the last query, and the average number of messages per second since the last query. This count does not include the protocol messages.

msgin Total number of messages input to the server. For an input server, this is the number of network messages, including the protocol messages. For an output server, this is the number of tape blocks read from the pipe.

msgout Total number of messages output from the server. For an input server, this is the number of tape blocks written into the pipe. For an output server, this is the number of messages sent on the network, including the protocol messages.

curmsg Current message number. During normal operations, the values of 'curmsg' for both servers should be equal. Should they diverge, the input server will log the incident and re-synchronize.

Status Short description of the server current status:
Open pipe
The server is waiting for the associated tape process to open the pipe. This is the quiescent state of both servers when no tape process has opened the associated pipe.

Reading network
The input server is waiting for incoming data from the network. This is the quiescent state of the input server.

Reading pipe
The output server is waiting for data from the associated tape process. This is the quiescent state of the output server.

Wait 1st call
The output server is waiting for the answer to its first call to establish connection.

Wait incoming connect
The input server is waiting for an incoming call.

Wait subsequent call
The output server is waiting for the answer to repeated call(s) to establish connection. This is an indication of failure to establish communication with the input server.

Stopped
The server is stopped.


Drain Command :

The "drain" command empties the specified pipe. This command is implicitly executed when starting a server, unless the (C) option is specified or if the Server is linked to the Transaction Logger. Emptying the pipe is sometimes necessary to re-synchronize the processes. The data which is drained out is saved in the file "ts.log,backup" for later processing.


Check Command :

The "check" command is used to send requests from the local Output Server to the remote Input Server. The main purpose is to check the communication link and to do some remote control of the input server. "chk.com" is the command to be executed by the remote input Server. If there is more than one word, it must be enclosed in quotes (eg cmd='exec where 0 (h'):

exec tcl.com Execute the TCL command 'tcl.com' on the remote. This command should be a simple one, like 'who' or 'time'. Only the first line of the result is returned.

msgnum Returns the last message number received by the Input Server.

query Query the remote Input Server for its status.

shtdwn Shutdown the Input Server. The Input Server terminates immediately. If it is linked to the Transaction Log Sub-System, the transaction restore process is aborted and the process is sent back to the tape-socket menu. Valid only on version 1.5.0 and later.

test -f fn Test if the file 'fn' exists. If so, a string "1 File 'fn' exists" is sent back. Else a string "0 File 'fn' missing" is sent back. Valid only on version 1.5.0 and later.

test -r{d} fn id Test if the file 'fn' exists, and if the item 'id' is in this file. If so, a string "1 <item body>" is sent back. Else a string "0 File 'fn' missing" or "0 Item 'id' missing" is sent back. If the 'd' flag is present, the item is deleted. Valid only on version 1.5.0 and later.

tlchk Check the Transaction restore on the backup system. Valid only if the Input Server is linked to the Transaction Log Sub-System. Valid only on version 1.5.4 and later. This command makes sure the process doing the transaction restore is in a 'normal' state (attached to the tape, not waiting for input, not in the debugger, etc.), and will make some attempt at correcting the problem (answering the prompts). This command is executed automatically by the Output Server when a transaction log polling test fails, and by the Input Server if it appears that the transaction restore is not emptying the pipe.

tlstrt Restart the Transaction restore on the backup system. Valid only if the Input Server is linked to the Transaction Log Sub-System. Valid only on version 1.5.0 and later.

tron n Turn traces on on the remote. 'n' is the number of traces.

troff Turn traces off on the remote.
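At its core, the check mechanism is one request followed by a one-line reply over the TCP link. The sketch below illustrates that request/reply pattern in Python; the wire format and the reply text are hypothetical illustrations, not the actual tape-socket protocol:

```python
# Minimal sketch of a one-request / one-reply "check"-style exchange over TCP.
# The command and reply strings here are hypothetical, chosen only to mirror
# the "1 ..." / "0 ..." style of the test responses described above.
import socket
import threading

def input_server(sock):
    # Accept one connection, read a command line, send back a one-line reply.
    conn, _ = sock.accept()
    with conn:
        cmd = conn.recv(1024).decode().strip()
        if cmd == "query":
            conn.sendall(b"1 Input Server running\n")
        else:
            conn.sendall(b"0 unknown command\n")

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=input_server, args=(srv,))
t.start()

# "Output Server" side: send the check command, keep only the first reply line.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"query\n")
reply = cli.recv(1024).decode().splitlines()[0]
cli.close()
t.join()
srv.close()
print(reply)
```

Like the real 'check' command, only the first line of the response is kept; anything beyond it is discarded.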


System Administrator Messages :

This section lists the messages that may be sent to the Pick users designated in the "notify" parameter, their likely causes, and possible actions to correct the situation:

Communication to host is re-established.
The Output Server succeeded in re-establishing the connection. This message is issued only once.

Input Server stopped due to an error.
The Server encountered a fatal error. Use the "Status" command on the receiver's side to find the last error. This is likely to be due to a serious condition: the Unix pipe does not exist or does not have the proper access rights, the TCP port number is already in use, etc.

Network Back On Line.
After a network error was detected, the communication was re-established. This message is sent only once, to indicate the end of a problem.

Network Error. Check Error Log.
A network error was detected. Check the error log to see the cause of the failure. Check that the Input Server is up. Use the Unix command "ping <host>" to make sure the remote host is reachable. This message is sent only once, the first time an error is detected.

Network is disconnected. Re-trying to call host
The Output Server failed to establish or re-establish the connection after three attempts. This message is issued only once. Check the error log to see the cause of the failure. Check that the Input Server is up. Use the Unix command "ping <host>" to make sure the remote host is reachable.

Output Server stopped due to an error.
The Server encountered a fatal error. Use the "Status" command to find the last error.

Transaction logger problem. Test item not sent.
The Output Server found that none of the Transaction Log Test items reached the remote system. Make sure the communication is up and that the enqueuing of transactions is active. If the transaction log queue is large, it may be because the test items are still in it. If the queue is empty or small, check the transaction logger with the TCL command "txlog". This message is likely to be an indication of a serious problem. This message is issued at every failed attempt until a test succeeds. If this becomes a nuisance, change the polling period, using the menu option "Change TX LOG Polling", in the "Special Operations" sub-menu.

Transaction logger problem. Lost poll.
The Output Server found that one of the Transaction Log Test items did not reach the remote system, even though some test items made it on the other system. This is likely to be a temporary condition, due to a large queue. This message is issued at every failed attempt until a test succeeds. If this becomes a nuisance, change the polling period, using the menu option "Change TX LOG Polling", in the "Special Operations" sub-menu.

Transaction log back on line.
The Transaction Log Test polling resumed its normal operations after an incident was discovered in a previous test. This message is issued only once to indicate the end of the problem.

Transaction logger not attached to tape
This message indicates that the transaction logger was detached from the tape without the Output Server being notified. This was probably caused by using the "txlog" menu instead of the "tape-socket" stop command or menu option. Stop the Output Server and re-start it to correct this situation.

Transaction Restore not Re-Started on Receiver.
This message indicates that the Input Server failed to restart the transaction restore. The process doing the transaction restore on the receiver is probably waiting for a user prompt, most likely because the transaction logger was stopped by a means other than the "tape-socket" menu or command. If the tape has been detached manually from the transaction logger, which can be seen with the "txlog" command, it might be possible to restart the transaction restore from the MASTER system: select the option "Send Command to Remote" in the sub-menu "Special Operations" and send the command "tlstrt", which instructs the Input Server to restart the transaction restore. If this remote command succeeds, re-attach the tape to the transaction logger on the sender's side using the "txlog" menu. If it fails, the System Administrator must act on the receiver's side, answering whatever question the transaction restore process is asking (e.g., "Mount Next reel", or "end" if an abort occurred, etc.), and then re-start the Input Server. The pipe might have to be drained on the receiver's side, using the "Drain Pipe" option in the "Special Operations" sub-menu.

Transaction Restore problem. diagnostics
This message indicates that the Transaction Restore, on the receiving side, is in an abnormal state. This message is issued by the Output Server when, after having detected that a transaction polling test failed, an attempt at correcting the situation failed. This situation will probably require the System Administrator to intervene on the backup system (or use the remote command execution to act on the transaction restore).


Default Server Menus :

Without any option, a menu is displayed. This menu allows operations on the default Server. All optional arguments are set to their defaults, when using the menu. This should suit most configurations where there is only one server, either input or output. When an argument is missing, the user is prompted for it.

Network Tape (1.5.0)

1) List Status 4) Stop Server 7) Show Server
2) Query Server 5) Special Operations 8) Shutdown
3) Start Server 6) Setup Server 9) Other Servers


The "Special Operations" sub-menu allows performing seldom-used operations, for instance to test a new installation.

Network Tape (1.5.0) : Special Operations

1) Turn Trace ON on Server 7) Start Server with NO clear
2) Turn Trace OFF on Server 8) List Permanent Log
3) Change TX LOG polling 9) Clear Permanent Log
4) Change notify user 10) Test transaction Log
5) Drain Pipe 11) List pipes
6) Send Command to Remote

Each option in the menu has some on-line help. See the example below for how to use the menu.
Syntax tape-socket {cmd {keyword=value} {(options}}
Options C Do NOT clear the pipes before starting a server. This option should be used only if data in the pipe should be preserved. This situation normally arises only when the server is stopped in the middle of a communication. Extreme care should be taken when using this option.

Q Quiet. Suppress some user messages and confirmation prompts when stopping servers and draining pipes.

R Show only 'Running' servers in the "status" command.

S Suppress the synchronization of clocks at startup time. Valid only on version 1.5.0 and later.

V Verbose. Record all events in the log file.
Example
Hot Backup Setup Example :
Assume a TCP/IP configuration over Ethernet, between two systems. The sender is 
the production system 'PROD' and the receiver is a backup system 
'BACKUP'. The two systems are to be set up in a 'hot backup' 
configuration. Both systems are defined in the Unix '/etc/hosts' file.

'BACKUP' setup:

  - Create a pipe (from Unix):
    mknod /dev/tapein p
    chmod a+rw /dev/tapein

  - Declare the pipe as a pseudo tape in the Pick configuration file of the 
receiver, by inserting the statement:
    tape /dev/tapein 500 c lx

  - Boot the Pick virtual machine on the 'BACKUP' system. You MUST 
have at least TWO terminals connected to the 'BACKUP' system. One is 
going to be used for the transaction restore, and the second one will be used, 
temporarily, for system administration.

  - Select an unused TCP port number. The list of currently used port numbers 
can usually be found in the Unix file "/etc/services", or by using 
the "netstat -a" command. The number can be anything (>1024 and 
<32767), as long as both servers agree on it. This example uses 3000.

  - Set the default Server by selecting the option 'Setup Server' in 
the menu. Enter the following:
    Server type : in
    TCP/IP port number : 3000
    Protocol : inet
    Unix pipe name : /dev/tapein
    Pick User to notify : bob
    Start transaction logger : on

  - Start the input server on 'BACKUP' by selecting the option 
"Start Server", or by typing, at TCL:
    tape-socket start
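The /dev/tapein device created with mknod above is an ordinary Unix named pipe (FIFO): bytes written by one process are read, first-in first-out, by another. The sketch below demonstrates the same behavior in Python using a temporary FIFO; the path and data are illustrative stand-ins for the pseudo-tape pipe:

```python
# A Unix named pipe (FIFO) like /dev/tapein is a first-in first-out byte
# stream between two processes. Demonstrated here with a temporary FIFO;
# the path and payload are hypothetical stand-ins for the pseudo tape.
import os
import tempfile
import threading

fifo_dir = tempfile.mkdtemp()
fifo_path = os.path.join(fifo_dir, "tapein")
os.mkfifo(fifo_path)               # same effect as: mknod <path> p

def writer():
    # open() blocks until a reader is present, mirroring FIFO semantics.
    with open(fifo_path, "wb") as f:
        f.write(b"tape block 1")

t = threading.Thread(target=writer)
t.start()
with open(fifo_path, "rb") as f:   # reader end (the "pseudo tape")
    data = f.read()
t.join()
os.unlink(fifo_path)
os.rmdir(fifo_dir)
print(data.decode())
```

The blocking-open behavior is why a server stopped mid-transfer can leave data stranded in the pipe, which is what the C (no clear) option and the "Drain Pipe" menu option deal with.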

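The port-selection step above can also be verified programmatically: attempting to bind a socket to the candidate port fails with EADDRINUSE if some other process already owns it. A small sketch (the helper name is ours, not part of tape-socket):

```python
# Quick check that a candidate TCP port (e.g. 3000 in the example above) is
# not already in use on this host: a successful bind means it is free.
import errno
import socket

def port_is_free(port, host="0.0.0.0"):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError as e:
        if e.errno == errno.EADDRINUSE:
            return False
        raise
    finally:
        s.close()

# A port the OS just handed out (then released) should normally test free.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
candidate = probe.getsockname()[1]
probe.close()
print(candidate, port_is_free(candidate))
```

This is a point-in-time check only; consulting /etc/services and "netstat -a" as described above remains the way to avoid ports that are reserved but not currently bound.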

'PROD' setup:

  - Create a pipe (from Unix):
    mknod /dev/tapeout p
    chmod a+rw /dev/tapeout

  - Declare the pipe as a pseudo tape in the Pick configuration file of the 
sender, by inserting the statement:
    tape /dev/tapeout 500 c lx

  - Boot the Pick virtual machine on the 'PROD' system.

  - Set the default Server by selecting the option 'Setup Server' in 
the menu. Enter the following:
    Server type : out
    Remote HOST name : BACKUP
    TCP/IP port number : 3000
    Protocol : inet
    Unix pipe name : /dev/tapeout
    Pick User to notify : bob
    Transaction Log test polling : 00:10:00
    Start transaction logger : on
    Log (DL) files or ALL : dl
    Transaction log queue period : 3

  - Start the output server on 'PROD' by selecting the option 
"Start Server", or by typing, at TCL:
    tape-socket start

On both systems, list the server activity by selecting the option "List 
Status". Both servers should show a status 'Started'. Query the 
servers by selecting option "Query Server". The Output Server should 
show a status "Reading Pipe" and the Input Server should show a 
status "Reading Network". If not, refer to the section 
'Troubleshooting' below.

To check that the remote Input Server is active, select the "Special 
Operations" sub-menu, option "Send Command to Remote" and, in 
answer to the question "cmd=", type 'query'; or, from TCL, 
use the 'check' command:
  tape-socket check cmd=query

The Input Server should respond with a short message. The 'check' 
command can also be used to execute short commands on the remote Input Server. 
For instance, to set the date on the remote Input Server (note the use of 
quotes around the command):
  tape-socket check cmd="set-date 10/06/93"

Stopping a Server :
When attempting to stop a server, a warning is issued if the pipe served by 
this process is not empty. Unless absolutely required, it is not recommended to 
stop the server while data is in the pipe. In addition, if the server is linked 
to the transaction restore, stopping the Input Server while the Output Server 
has not been stopped will also produce a warning.

Detaching the Tape on the Master system :
On releases prior to 6.1, to be able to use the tape on the master system, it 
is necessary to stop the Transaction Logger. This can be done without any human 
interaction on the backup system. Do NOT use the "txlog" menu to do 
this. Select the tape-socket menu option "Stop Server". This command 
will detach the tape from the transaction logger, make the remote machine aware 
of the fact that the transaction logger is stopping temporarily, and stop the 
Output Server.
To re-attach the tape to the Transaction Logger and restart the data transfer, 
again,  do NOT use the "txlog" menu to do this. Select the 
tape-socket menu option "Start Server". The option will restart the 
Output Server and the Transaction logger process.

Detaching the Tape on the Backup System :
This operation should also be done from the MASTER system, to make sure all 
operations are done in the proper order. It involves stopping the transaction 
logger on the master system, stopping the transaction restore on the backup 
system, and then stopping the Servers. All this is accomplished by the 
"Shutdown" menu option on the MASTER system. After the remote 
shutdown has completed, the process which was doing the transaction restore is 
sent back to the tape-socket menu, after having detached the tape on the backup 
system. The tape can then be used.
To restart the system, first restart the Input Server. Remember that by 
starting the Input Server linked to the Transaction Restore, the process on 
which the "Start Server" menu option is run, BECOMES the process 
which does the transaction restore, thus not freeing the terminal. Then restart 
the Output Server on the Main system.
Note that if it is attempted to stop the Input Server without first stopping 
the Output Server, the Input Server will complain. If the stop command is 
repeated, then the Input Server will stop, even if the Output Server is still 
running.

Other usages :
When not linked to the Transaction Logger Sub-System, the tape-socket Servers 
can be used for a variety of functions. Be sure the Servers are set up so that 
they are NOT linked to the Transaction Logger, by using the "Setup 
Server" menu option.

- The 'tape-socket' command can also be used to provide remote access 
to a floppy. To achieve this, on the sender's side, start the output 
server using the floppy device name instead of a pipe. The output server will 
start reading from the floppy and write its content over the network into the 
pipe on the receiver's side, which can then do a T-LOAD, for example. 
After the floppy has been sent, stop both servers. It will not handle multiple 
volumes. 

- There is no obligation that the tape process and server be in the same Pick 
virtual machine. One application is to do full file restores across the 
network. To implement this, the receiver's system should have a small Pick 
virtual machine, in which the input server is running. The data it receives 
from the network (the file save), is written into the pipe which is read by the 
real Pick virtual machine doing its file load.
Purpose
Related basic.%socket
tcl.txlog
transaction.logger
tcl.stoplog
tcl.txlog-status
tcl.update-logging
general.tape-socket
tcl.tlog-restore
general.network.save/restore

compile.time.date.stamp.rp

Command compile.time.date.stamp.rp Definition/BASIC Program
Applicable release versions: R83
Category BASIC Program (486)
Description defines the structure of Pick/BASIC object pointers.

When a program is compiled in an R83 release, a pointer is placed in the dictionary level of the file in which the source program resides. This pointer defines where the object code resides and is used whenever the program is run. The structure of the pointer is as follows:

Attr Contents Description

0 item-id Same as the source item.
1 CC Literal "CC".
2 fid "Base" fid of object code.
3 frames Integer number of frames used.
4 nothing
5 time/date The time/date of the compile.

The actual format of the time date is as follows:

hh:mm:ss dd mmm yyyy

The date begins in the eleventh character position, for a length of eleven characters. An attribute-defining item can be placed into the md of the account to obtain the actual compile date, in a form where it can be used by Access, even though it is stored in "external" format. This ADI would appear as follows:


Attr Contents

0 item-id (for our example, assume "PF.DATE")
1 A
2 5
3
4
5
6
7 D2/
8 T11,11]DI (Note that "]" is a value mark)
9 R
10 8

With this item in place, it is now possible to produce a report with Access. See the examples below.
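The T11,11 correlative in attribute 8 of the ADI simply extracts an 11-character substring starting at column 11 of the compile stamp in attribute 5. The Python sketch below shows the equivalent slice; the sample stamp is hypothetical, padded so that the "dd mmm yyyy" field begins in column 11 as the description above states:

```python
# Equivalent of the ADI's T11,11 text extraction: take 11 characters
# starting at column 11 (1-based) of the compile stamp in attribute 5.
# The sample stamp is hypothetical, padded so the date field begins
# in column 11 as the description above states.
stamp = "02:30:15  10 Jun 1993"

def t_extract(value, start, length):
    # Pick's T<start>,<length> correlative: 1-based start column.
    return value[start - 1:start - 1 + length]

compile_date = t_extract(stamp, 11, 11)
print(compile_date)
```

The D2/ conversion in attribute 7 then lets Access treat the extracted external-format date as a date for sorting and selection.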
Syntax
Options
Example
SORT DICT BP BY-DSND PF.DATE PF.DATE

This produces a report in which the most recently compiled programs sort to 
the TOP of the list. This is useful for determining if a particular 
program compiled successfully.

SORT DICT BP WITH NO PF.DATE

This report lists only those items which have NOT been compiled.
Purpose
Related compile.time.date.stamp.ap
tcl.basic
tcl.compile
tcl.run
tcl.pverify
r83.source.files
value.mark
pc.d
pc.text.extract
attribute.defining.items

sib

Command sib Definition/PROC
Applicable release versions: AP, R83
Category PROC (92)
Description used exclusively for holding the item-id's of error messages returned by processes executed by PROC. The PROC "ss" command activates the secondary input buffer.
Syntax
Options
Example
Purpose
Related proc.ss
proc.ri
proc.s
proc
filename.messages
sob
primary.input.buffer
pob
proc.go
proc.if
proc.ih
proc.ip
proc.is