
Implementer and SQL DDL thoughts/standards?

ptc-4670881
1-Newbie


We are on the System i (aka AS/400) using Implementer. We are transitioning from defining our physical and logical files using DDS to using SQL DDL to define SQL Tables and Views. I would like to get some thoughts and opinions from other users of Implementer that are using SQL DDL as to how they structure their SQL DDL script source members within Implementer and promote them through their various environments?

Any input/thoughts would be greatly appreciated.


From an Implementer standpoint the process is quite similar, although behind the scenes things work differently. If you have been using the optimize flag for DDS file types, the process will feel familiar: SQL requires that the optimize flag be set to Y.

You would still check out your files (Implementer provides all of the SQL file type object codes for you), make your changes in Development and compile there. You just need to change your source. Implementer will automatically handle the drop and the alter table statements for you.

When you promote forward, the process is exactly the same as for traditional files. If the compile-required flag is set to Y, Implementer still compiles the tables and other DDL files in the work library. When you do the move, however, if the file object exists in the target library, Implementer automatically generates your alter table statement for you and does the CHGPF/CHGLF against the existing file. If the file does not exist in the target, the compiled file in the work library is moved to the target, just as a traditional file would be.
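To make that concrete, here is a rough sketch of what the move step amounts to when the object already exists in the target. The table and column names below are made up for illustration; the actual statement is generated by Implementer, not written by hand:

```sql
-- Checked-out source now declares a new column (hypothetical names):
CREATE TABLE MYLIB.ORDHDR (
  ORDNO   INT      NOT NULL,
  ORDDATE DATE     NOT NULL,
  CARRIER CHAR(10) NOT NULL DEFAULT ''   -- newly added column
);

-- Because MYLIB/ORDHDR already exists in the target library, the move
-- would in effect apply the difference rather than re-create the table:
ALTER TABLE MYLIB.ORDHDR
  ADD COLUMN CARRIER CHAR(10) NOT NULL DEFAULT '';
```

The point is that your source member always contains the full CREATE TABLE; the CREATE-vs-ALTER decision happens at move time.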

Related object processing is much improved for SQL because you can use techniques such as views built on views. The current release, 10.2, can handle all of these new SQL relationships for you.

As far as converting an existing DDS file to a DDL table, there is a KnowledgeBase article on our website that gives you the detailed steps on how to accomplish that.

Please let me know if you have any other questions.

Thanks for your reply, Sharon.

If I am understanding correctly we would do the following:

  1. Create an Implementer checkout with a Code of SQLTABL. The source would be a CREATE TABLE statement.
  2. Promote the checkout and the CREATE TABLE statement would be executed.
  3. When we want to add a column to the table we would check out the SQLTABL and modify the CREATE TABLE statement as desired.
  4. Promote the checkout, and Implementer would analyze the change, determine whether a CREATE TABLE or an ALTER TABLE needs to be performed, and take the appropriate action.

Am I understanding the above process correctly?

A few more questions:

5. Should CREATE VIEW statements be separate checkouts?

6. Should CREATE INDEX statements be separate checkouts?

7. Implementer would also properly handle any Referential Constraint and/or Primary Key changes relating to the CREATE TABLE statements when changes are made?

You are definitely understanding the steps correctly.

Yes, there are object codes for both views and indexes. They should be separate checkouts, and are treated like traditional logical files, as far as related objects are concerned.
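As a sketch of what "separate checkouts" means in source terms, each statement below would live in its own source member under its own object code (the view/index names are illustrative; the table and columns come from the example later in this thread):

```sql
-- Source member for a view checkout, treated like a logical file:
CREATE VIEW MYLIB.ACTIVE_MILEAGE AS
  SELECT * FROM MYLIB.CSMLGMSP
  WHERE RecordStatus = 'A';

-- Source member for an index checkout, kept separate so the index
-- key can change without touching the table's checkout:
CREATE INDEX MYLIB.CSMLGMS1
  ON MYLIB.CSMLGMSP (MileageVersion);
```

Keeping one statement per member also lets Implementer's related-object processing rebuild just the affected objects when the base table changes.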

And yes, Implementer can handle referential constraints and primary keys.

maw
4-Participant
(To:smigliorato)

Hello Smigliorato, I cannot find the KnowledgeBase article about converting existing DDS files to DDL tables. Can you provide the correct link?

Thanks,

Michael W.

Hi Mike,

we are going through the same process (from DDS to DDL), still using Implementer, and it works very well!

Any thoughts on how to handle indexes? Are you adding indexes to the same SQL script as the CREATE TABLE statement, or are you creating separate SQL script members (checkouts) for indexes?

In my opinion it is correct to create a one-item/one-checkout association, for better traceability. It can happen that you modify the index key without modifying the table. So my suggestion is to create a separate SQL script member for every source. Thanks, Tommaso

Thanks for the input, Tommaso. Do you ever use System i Navigator Index Advisor to create indexes, or do you always create them in Implementer and promote them through the proper environments?

When a table is re-created or altered, are all of the indexes reapplied as well, or is that something you would need to do as a re-compile or equivalent in Implementer?

Hi Mike,

I suggest you always use Implementer.

When a table is modified, all indexes/views/programs are automatically pulled in as related objects and recompiled. It works very well with *FILE objects!

At your disposal for further info... Tommaso

P.S. What Implementer version are you using?

Tommaso,

We are using version 2010.

When doing some tests we are losing our unique and check constraints when promoting a table. Do you have any issues with promoting constraints?

Thanks

Hi Mike,

could you send me an example? What do you mean by "unique and check constraints"?

Constraints like a primary key? Like a foreign key, or an integrity check on the column definition?

Thanks,

Tommaso

An example would be as follows:

Create Table CSMLGMSP(
  CMEQID bigint generated always as identity
    (start with 1, increment by 1) primary key,
  MileageVersion for column CMMLVR CHAR(40) NOT NULL DEFAULT '',
  RecordStatus for column CMRCST CHAR(1) NOT NULL DEFAULT '',
  CHECK (CMRCST = 'A' OR CMRCST = 'I'))
  RCDFMT CSMLGMSR;

When it is promoted to the QA environment it does not have the Primary Key constraint or the Check constraint defined. Any thoughts?
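One way to confirm what actually survived the promotion is to ask the DB2 for i catalog directly. A sketch of such a check, assuming the QA library is CUSTEST (from the environment screen later in this thread) and using a made-up development library name:

```sql
-- List the constraints DB2 for i recorded for the table in each library;
-- a primary key shows as PRIMARY KEY and the CHECK as CHECK.
-- DEVLIB is a placeholder for your development files library.
SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_SCHEMA
  FROM QSYS2.SYSCST
 WHERE TABLE_NAME = 'CSMLGMSP'
   AND TABLE_SCHEMA IN ('DEVLIB', 'CUSTEST')
 ORDER BY TABLE_SCHEMA, CONSTRAINT_TYPE;
```

If the QA row set is missing the PRIMARY KEY and CHECK entries that exist in development, the constraints were dropped during the move rather than by the CREATE TABLE itself.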

Ok... please send me the following screen prints:

- QA environment configuration
- SQLTABL object code

You probably need to set a parameter; in my QA environment everything works fine.

Bye,

Tommaso

QA Configuration:

DSPENV.1                        Display Environment
Environment . . . . . . . . . : WERCUSQA     Name
Description . . . . . . . . . : Customer
Administrator . . . . . . . . : ADMINS       Administrator
Env type  . . . . . . . . . . : *QAC
                                             Archive
Library defaults:     Name      Lib owner    Obj owner    Versions
  Program library . : CUSTEST   QPGMR        QPGMR
  Files library . . : CUSTEST   QPGMR        QPGMR
  Source library  . : CUSTEST   QPGMR        QPGMR
  Archive library . :
Create Request defaults:            Chg
  Compile required  . . . . . : Y   Y        Y=Yes, N=No
  Auto submit in create rqs . : Y   Y        Y=Yes, N=No
    Through step  . . . . . . : 4   Y        2=Comp, 3=Dist, 4=Move
  Add related objects to rqs  : Y   Y        Y=Yes, N=No
                                                              More...
F3=Exit   F8=Object codes   F11=Authorities   F12=Cancel
F13=Library list   F20=User profiles   F21=Work env groups

DSPENV.2                        Display Environment
Create request options:
  Check out required  . . . . : N            Y=Yes, N=No
  Allow authority overrides . : N            Y=Yes, N=No
Environment information:
  Special environment . . . . : Standard
  Design rqs rqd in check out : N            Y=Yes, N=No
  Project reference required  : N            Y=Yes, N=No
  Retain error-free joblogs . : N            Y=Yes, N=No
  Remove obj in from lib/env  : 1            1=Always, 2=Never, 3=Per obj code
  Remove src in from lib/env  : 1            1=Always, 2=Never, 3=Per obj code
  Maintain related object info: Y            Y=Yes, N=No
    Source of information . . : 1            1=Implementer, 2=Pathfinder
    Pathfinder X-ref library  : *DEFAULTS    Name, *DEFAULTS
    Update Pathfinder X-ref . : N            Y=Yes, N=No
    DOCLIBL for X-ref updates : *DEFAULTS    Name, *DEFAULTS, *USER, *STANDARD
                                                              More...
F3=Exit   F12=Cancel

DSPENV.3                        Display Environment
MKS Integrity issue state:                   Name, *DEFAULT, *NONE
  When arrives in this env  . : *DEFAULT
  When rejected from this env : *DEFAULT
Remote information:
  System  . . . . . . . . . . : HQDEV1
  Source library location . . : L            L=Local, R=Remote
  Compile location  . . . . . : L            L=Local, R=Remote
  Distribution method . . . . : 1            1=SNADS       2=Tape
                                             3=DVD         4=SDMCOM
                                             7=TCP/IP-FTP  8=TCP/IP-Tape
  Remote initiated move . . . : N            N=No, Y=Yes
    Updates host  . . . . . . : N            N=No, Y=Yes
  Retain requests on remote . : N            Y=Yes, N=No
  Promotion notification queue:              Name, *NETATR, *LOC
                                *LIBL        *LIBL, Name
  Target release  . . . . . . : *NETCONFIG
                                                              Bottom
F3=Exit   F12=Cancel

The SQLTABL code is as follows:

Create Table CSMLGMSP(
  CMEQID bigint generated always as identity
    (start with 1, increment by 1) primary key,
  MileageVersion for column CMMLVR CHAR(40) NOT NULL DEFAULT '',
  RecordStatus for column CMRCST CHAR(1) NOT NULL DEFAULT '',
  CHECK (CMRCST = 'A' OR CMRCST = 'I'))
  RCDFMT CSMLGMSR;

LABEL ON TABLE CSMLGMSP
  IS 'Rand/PCMiler Mileage Source Master';

LABEL ON COLUMN CSMLGMSP
  (MileageVersion IS 'Mileage Version',
   RecordStatus IS 'Record Status (A I)');

LABEL ON COLUMN CSMLGMSP
  (MileageVersion Text IS 'Mileage Version',
   RecordStatus Text IS 'Record Status (A I)');

I apologize Mike, but by "display object code" I mean this:

Object code . . . . . . . . . : SQLTABL      SQL Table
Activity flag . . . . . . . . : 1            1=Active, 0=Inactive
Object type . . . . . . . . . : *FILE
Object attribute  . . . . . . : PF
Source member type  . . . . . : TABLE
Default source file . . . . . : QTBLSRC
Creation sequence . . . . . . : 100          1-9999
Special characteristics . . . : SQLTABLES    *DATA, *MERGE, *MAINT, ...
Object authority  . . . . . . : *KEEP        *KEEP, *GRANT
Remove obj in from env  . . . : Y            Y=Yes, N=No
Remove src in from env  . . . : Y            Y=Yes, N=No
Creation process  . . . . . . : C            C=Compile, M=Move
Archive in MKS Source . . . . : Y            Y=Yes, N=No
Creation command  . . . . . . : RUNSQLSTM SRCFILE(£SRCLIB/£SRCFIL)
                                SRCMBR(£SRCMBR) COMMIT(*NONE) DFTRDBCOL(*NONE)

Below is Display Object Code for SQLTABL:

DSPOBC.1                      Display Object Code
Object code . . . . . . . . . : SQLTABL      SQL Table
Activity flag . . . . . . . . : 1            1=Active, 0=Inactive
Object type . . . . . . . . . : *FILE
Object attribute  . . . . . . : PF
Source member type  . . . . . : SQLTABL
Default source file . . . . . : WERSRC
Creation sequence . . . . . . : 0100         1-ZZZZ
Special characteristics . . . :              *DATA, *MERGE, *MAINT, ...
Object authority  . . . . . . : *KEEP        *KEEP, *GRANT
Remove obj in from env  . . . : N            Y=Yes, N=No
Remove src in from env  . . . : N            Y=Yes, N=No
Creation process  . . . . . . : C            C=Compile, M=Move
Archive in MKS Source . . . . : Y            Y=Yes, N=No
Creation command  . . . . . . : RUNSQLSTM SRCFILE(#SRCLIB/#SRCFIL)
                                SRCMBR(#SRCMBR) COMMIT(*NONE) DFTRDBCOL(#FILLIB)
F3=Exit   F12=Cancel

Let's modify it, adding:

Special characteristics . . . : SQLTABLES

...and try again! Keep me posted.
