|by Bill Gladwin
An interface is the communication of data, and the preservation of data integrity, between multiple production applications.
The complexity of an interface is compounded by technology as well as by business practices. As business functions become more integrated, so must the
applications which support those functions. As applications develop, so does data redundancy. Managing that redundancy is key in any interface methodology.
Another fact that has to be decided is which application “OWNS”, or is responsible for, the validity of the data. Applications that “OWN” data elements have a
fundamental requirement to share the maintenance of that data across systems, eliminating redundant maintenance.
There are also events within an application that trigger events in other applications, such as a purchase order receipt causing an accounts payable transaction, or
the shipment of product causing an accounts receivable transaction. These cross-system events are other examples of the communication of data between multiple
production applications. The use of multiple hardware platforms, both local and remote, adds another level of complexity when communicating between these
applications.
We also have to consider the availability of the data we need to access.
The availability of an application for interfacing could also be considered a separate interface in itself.
Lastly I will cover Error Processing. I do want to emphasize that I feel this should not be an elaborate process. I have been in many meetings that have
addressed this issue. Development of an Error Process is necessary but should not be dwelled upon. If an interface error does occur, it is usually from a previously
failed interface or an oversight in the initial interface design. When the error does occur, GO BACK to the original problem and fix that also. This will eliminate
future interface errors. Eventually all interface errors will be handled correctly and elaborate Error Processing will no longer be needed.
The following represents the types of interfaces we employ.
1) Availability Interfacing
2) Real Time Interfacing
3) Asynchronous Communication Interfacing
4) Batch Interfacing
|Availability Interfacing
The first thing to consider is an Interface Library or Area being available to all applications, user profiles, and servers. The security level for these files will have to be
researched. This Interface Library or area will be independent of all applications. Interface files will reside in this library for the Asynchronous and Batch
interfaces.
The availability should be verified with an external switch or process. These switches should be made available to all applications at all times. The availability
switch should represent multiple levels of availability.
The reason an application might not be available could be exclusive access, file backup, or another running process needing the data to remain in a
frozen state. For example, MRP needs a snapshot of the existing inventory, which must remain unchanged until the full MRP cycle is complete.
The levels of access can help resolve some of these issues. Access availability should be defined at least to the levels of No Access, Read Only, and Update.
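A minimal sketch of such an availability switch, in Python for illustration only (the application names and the three levels are my assumptions; on an AS/400 the switch might live in a data area in the shared interface library):

```python
from enum import Enum

class Availability(Enum):
    """Hypothetical availability levels; the article implies at least
    these three (compare the No Access / Read / Update statuses in the
    Real Time section)."""
    NO_ACCESS = 0   # e.g. during backup or an MRP freeze
    READ_ONLY = 1   # data may be read but not changed
    UPDATE = 2      # full access

# In-memory stand-in for the external switch, one entry per application.
_switches: dict[str, Availability] = {}

def set_availability(app: str, level: Availability) -> None:
    _switches[app] = level

def can_read(app: str) -> bool:
    return _switches.get(app, Availability.NO_ACCESS) is not Availability.NO_ACCESS

def can_update(app: str) -> bool:
    return _switches.get(app, Availability.NO_ACCESS) is Availability.UPDATE

# During an MRP run the inventory data must stay frozen:
set_availability("INVENTORY", Availability.READ_ONLY)
assert can_read("INVENTORY") and not can_update("INVENTORY")
```

Every interfacing application checks the switch before touching the data, which is what makes the frozen-state scenarios above safe.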
Security is also an issue of availability. The security issue will be addressed in each of the following interfaces.
|Real Time Interfacing
A real-time interface is defined as the immediate access or update of data elements residing on another application’s database. This process is probably at once the most
and the least preferred. The challenge and obstacle you are faced with is the availability of the data and the security level.
Keep in mind that you are dependent on the availability of the application.
For read-only access, a “No Access” status may potentially be turned on. If you need Update access and only Read access is allowed at the time, you
cannot finish the session successfully. You now have a data integrity issue.
Security for Real Time Interfacing can be high maintenance. If allowed to access another application’s data, every user profile needing to interface in real time
will have to be set up on the Requesting Application’s side.
For cross-server access, each user’s profile also has to be set up. Not only will possibly hundreds of users have to be set up initially; someone will also have to
remember to add any new user on the legacy system to the Requesting Application.
I have seen generic user access set up to resolve this issue, but then user-level security cannot be taken advantage of. This option is greatly opposed by auditors.
Another form of Real Time Interfacing is Data Mirroring, and there is software available that does this. Particular fields can be mapped to another server’s
application. A consideration here is communication line availability. In some instances, the owning application can be slowed down waiting for the
mirrored application to be updated. Also consider the necessity of the mirroring. Will a file being maintained on the owning application need to mirror all its
activity? Maybe not. That update of the vendor’s preferred shipping method is not even a data element on the other server’s application, yet the update of that
vendor invoked an unnecessary communication to the data mirror. My input on Data Mirroring is from discussions with a few Technical Support personnel;
there might be other options for this process that I have not heard of. If you do consider Data Mirroring, just ask about these few points before purchasing the
software.
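The shipping-method example above boils down to a simple filter. A rough sketch, with made-up field names (no real mirroring product is being modeled here):

```python
# Fields of the vendor record that are actually mapped to the other
# server's application (hypothetical names for illustration):
MIRRORED_FIELDS = {"VENDOR_NAME", "ADDRESS", "CURRENCY"}

def needs_mirroring(changed_fields: set) -> bool:
    """Only push an update across the communication line when at least
    one changed field is part of the mirror mapping."""
    return bool(changed_fields & MIRRORED_FIELDS)

# Updating only the preferred shipping method triggers no communication:
assert not needs_mirroring({"PREF_SHIP_METHOD"})
# Changing the vendor's address does:
assert needs_mirroring({"ADDRESS", "PREF_SHIP_METHOD"})
```

If the product you are evaluating cannot filter at this level, every maintenance touch on the owning file costs a round trip.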
|Asynchronous Communication Interfacing
Asynchronous Communication Interfacing (ACI) can be considered an event-driven interface. ACI is a near real-time interface; this method of interfacing gives
the feel of a real-time interface. Using the proper utilities on the AS/400, and setting up the interface transaction file in the correct environment, is key to an efficient
interface. There are two major considerations that should be addressed: how will the “Owning” Application be affected by the interface, and how is the Receiving
Application going to be affected? With the proper setup, you can have a sub-second to a couple-of-seconds completion time. A minor consideration is the audit
trail of the transaction. Though minor, if set up correctly, you can have a good Disaster Recovery and Problem Analysis database.
The Interface Transaction file(s) will hold the interfaced data. Some interfaces have used only one file with many layouts. I’m an advocate of multiple
interface files, one per interface; the following design description assumes multiple interface files.
The Interface file is broken up into two segments: the Standard Front End and the rest of the interfaced data. The Standard Front End can look something like this:
Unique Key, made up of Process Date, Process Time, Created by Job #, and a Unique Sequence Number
For Application Code
Process Code
Process Date and Time
Error Code or Description
I would also suggest:
Created by Server
Created by User
Created by Application
Created by Process
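To make the layout concrete, here is a minimal sketch of the Standard Front End as a Python record. The field names are my own, not a published layout, and I have read the key's "Process Date, Process Time" as the creation timestamp, since the processing timestamp is filled in later:

```python
from dataclasses import dataclass
from datetime import datetime
from itertools import count

_seq = count(1)  # tie-breaking Unique Sequence Number

@dataclass
class StandardFrontEnd:
    """Illustrative layout of the Standard Front End segment."""
    created_date: str           # part of the unique key
    created_time: str           # part of the unique key
    created_job: str            # part of the unique key (Job #)
    sequence: int               # tie breaker for same-timestamp records
    for_application: str        # routes the record to the receiver
    process_code: str = "U"     # U=unprocessed, P=processed, E=error
    process_date: str = ""      # filled in by the Receiving Application
    process_time: str = ""
    error_text: str = ""        # one code or short description
    # Suggested extras, all for Problem Analysis:
    created_server: str = ""
    created_user: str = ""
    created_application: str = ""
    created_process: str = ""

def new_front_end(job: str, for_app: str, **extras) -> StandardFrontEnd:
    now = datetime.now()
    return StandardFrontEnd(now.strftime("%Y%m%d"), now.strftime("%H%M%S"),
                            job, next(_seq), for_app, **extras)
```

The interfaced data proper follows these fields in the physical record.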
This much data on every interface record? I get this question all the time. Please stay with me while I explain the purpose and benefits of this Standard Front End.
Unique Key – This is designed not only to be unique but also to enforce the proper execution sequence. The Unique Sequence Number is a tie breaker for transactions
that may need more than one record written at the time the Enter key is hit, or for a batch process using this layout that creates multiple records with the exact same
time stamp, which I will address in the Batch Interface Processing segment.
For Application Code – This code will be the key to the Receiving Application.
Process Code – This will be used as selection criteria by the Receiving Application. As transactions are processed by the Receiving Application, this will change
from an (U)nprocessed status to a (P)rocessed or (E)rror status. You can be even more elaborate and have multiple levels of processing statuses. The point of this is that
the Receiving Application will only have a logical view of (U)nprocessed records.
Process Date and Time – This will initially be sent as an initialized field(s). As transactions are processed by the Receiving Application, the actual process date and
time will be updated here. I prefer to separate the date and time as two fields.
Error Code or Description – As I mentioned before, try not to put too much effort into this. I will say that I have seen some elaborate Error Code processing,
using this field as a partial key to multiple error messages. Since only one code or description can fit in the field itself, it does not handle multiple errors that might
occur; for example, the vendor might not be valid and the country not valid on the same transaction. The partial-key approach is designed to have an error file holding the
multiple errors, all with the same exact partial key. This works quite well, but designing an elaborate Error Process might not be cost justified.
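The receiving-side use of the Process Code, Process Date/Time, and Error fields can be sketched as a simple sweep; this is an illustration in Python, not the actual AS/400 receiving program, and the dict keys match the hypothetical field names above:

```python
from datetime import datetime

def process_unprocessed(records, handler):
    """Receiving-side sweep over the logical view of (U)nprocessed
    records: run the handler on each, then stamp the record (P)rocessed
    or (E)rror instead of deleting it."""
    for rec in records:
        if rec["process_code"] != "U":
            continue  # the logical view hides already-handled records
        try:
            handler(rec)
            rec["process_code"] = "P"
        except Exception as exc:
            rec["process_code"] = "E"
            rec["error_text"] = str(exc)[:50]  # room for one code/description
        now = datetime.now()
        rec["process_date"] = now.strftime("%Y%m%d")
        rec["process_time"] = now.strftime("%H%M%S")

recs = [{"process_code": "U", "payload": "PO receipt"},
        {"process_code": "P", "payload": "already done"}]
process_unprocessed(recs, lambda r: None)
assert recs[0]["process_code"] == "P"
assert "process_date" not in recs[1]   # already-processed record untouched
```

Note that nothing is deleted: the status change alone is what removes a record from the logical view.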
The minimum required fields, and the other fields (Created by Server, Created by User, Created by Application, and Created by Process), are all used for Problem
Analysis. This brings me to another point. As you might have figured, interface records are not immediately deleted; they are rewritten with a different
processing code, giving a logically deleted effect. I was involved in a situation where an application had to be restored. The recovery for the interfaced data was
a very simple process: using the Process Date and Time, all records processed after the restore point were simply changed back to an (U)nprocessed status. Disaster
recovery took less than a minute for the interface transactions. The other fields are used to identify Who, What, When and Where. As you can see, the Standard
Front End can be invaluable. It must be thought about, critiqued, and finalized before you start any Standard Interface. Once you define the standard and start
development, it is quite costly to go back and change it.
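That recovery trick is worth spelling out. A minimal sketch, again with my illustrative field names rather than any real file layout:

```python
def reset_after_restore(records, restore_date, restore_time):
    """Disaster-recovery sweep: flip every record processed after the
    restore point back to (U)nprocessed, so the Receiving Application
    simply picks them up again on its next pass."""
    n = 0
    for rec in records:
        if (rec["process_code"] == "P"
                and (rec["process_date"], rec["process_time"])
                    > (restore_date, restore_time)):
            rec["process_code"] = "U"
            n += 1
    return n

recs = [
    {"process_code": "P", "process_date": "19990301", "process_time": "080000"},
    {"process_code": "P", "process_date": "19990302", "process_time": "143000"},
]
# Application restored to the early morning of March 2nd:
assert reset_after_restore(recs, "19990302", "060000") == 1
assert recs[1]["process_code"] == "U"
```

Because the records were only logically deleted, the whole recovery is one status flip per record.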
How will this all work? This design is mainly for interfacing to an AS/400 or multiple AS/400s on the same network. If you need to interface to another
AS/400 platform, I recommend using DDM (Distributed Data Management) for access to the Interface File. If you are not interfacing to an AS/400, or do not want
to use DDM, then Batch Interfacing may be your choice. The intent is to have the “Owning” or Sending Application write an interface transaction to the
Interface File. If you are using DDM, the Interface File should reside on the Sending Application’s server, so that if communication is down it will not affect the
Sending Application’s process. The other factor in this asynchronous process is having the Interface File triggered and a Data Queue program running. Once the
transaction is written to the file, the Standard Trigger Program will read the Standard Front End and write a data queue entry. This data queue entry will call the
appropriate interface program. This program will then read all (U)nprocessed records, usually only the one just written. If more than one record is
written around the same time to the Interface File, the first trigger transaction record will run the Receiving Process and keep reading all logical records to be
processed. The other data queue entries will just start and end the interface program, because the first data queue entry already processed their transactions. For a
more detailed description, click the Asynchronous Detail Document.
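The trigger-plus-data-queue flow above can be simulated in a few lines. This is a Python sketch of the pattern only; the real mechanism would use an AS/400 file trigger and a data queue, and the record fields here are my illustrative names:

```python
import queue

data_queue: queue.Queue = queue.Queue()
interface_file: list = []           # stands in for the Interface File

def trigger(record):
    """Standard Trigger Program: fires on every write to the Interface
    File and posts one data queue entry naming the target interface."""
    interface_file.append(record)
    data_queue.put(record["for_application"])

def receiving_process():
    """Reads ALL (U)nprocessed records, not just the one whose data
    queue entry woke it up, so later entries find nothing left to do."""
    done = 0
    for rec in interface_file:
        if rec["process_code"] == "U":
            rec["process_code"] = "P"
            done += 1
    return done

trigger({"for_application": "AP", "process_code": "U"})
trigger({"for_application": "AP", "process_code": "U"})
data_queue.get()                    # first entry arrives...
assert receiving_process() == 2     # ...and sweeps up both records
data_queue.get()                    # second entry arrives...
assert receiving_process() == 0     # ...but its record was already done
```

The second wake-up starting and ending with nothing to do is exactly the behavior described above, and it is harmless.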
|Asynchronous Detail Document 36.5 K
|Batch Interfacing
A batch interface is defined as an interface that is run on a predetermined schedule. The “owning” or sending application will create a batch file for the
receiving application to process. I plan to expand on this subject.
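As a bare-bones illustration of the sending side (Python and CSV are stand-ins here; a fixed-length record layout would be typical on the AS/400, and the field names are assumptions):

```python
import csv
import io

def write_batch_file(transactions):
    """Scheduled batch run on the sending side: dump the accumulated
    transactions into one flat file for the receiver to pick up later."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["for_application", "amount"])   # header row
    for txn in transactions:
        writer.writerow([txn["for_application"], txn["amount"]])
    return buf.getvalue()

batch = write_batch_file([{"for_application": "AP", "amount": "100.00"},
                          {"for_application": "AP", "amount": "250.50"}])
assert batch.splitlines()[0] == "for_application,amount"
assert len(batch.splitlines()) == 3
```

The receiving application reads the whole file on its own schedule, with no trigger or data queue involved.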
Please click on the Batch Process - Any Server/Application to AS/400 or the Batch Process - AS/400 to Any Server/Application documents
for a more detailed description of this subject.
|Batch Process - Any Server/Application to AS/400 142.4 K
|Batch Process - AS/400 to Any Server/Application 76.8 K