
THE CONTINUOUS AUDIT OF ONLINE SYSTEMS

Miklos A. Vasarhelyi, AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, N.J. 07974, and Rutgers University, Newark, N.J.
Fern B. Halper, AT&T Bell Laboratories, 600 Mountain Ave., Murray Hill, N.J. 07974.

Footnotes
Submitted to Auditing: A Journal of Practice and Theory, August 1989. Revised June 1990.
The authors wish to thank the two anonymous reviewers for their constructive comments and the editor for his review of the manuscript. We would also like to thank the participants of research seminars at Columbia University, Rutgers University, the University of Kansas, the University of Nebraska, and Boston University, and the attendees of the EDPAA, IIA, and AICPA professional meetings for their comments and suggestions. We are particularly indebted to Sam Parker, Chris Calabrese, Tsyh-Wen Pao, John Snively, Andrew Sherman, and Kazuo Ezawa for their work on the prototype system.
ABSTRACT

The evolution of MIS technology has affected traditional auditing and created a new set of audit issues. This paper focuses on the Continuous Process Auditing System (CPAS) developed at AT&T Bell Laboratories for the Internal Audit organization. The system is an implementation of a Continuous Process Audit Methodology (CPAM) and is designed to deal with the problems of auditing large paperless database systems. The paper discusses why the methodology is important and contrasts it with the traditional audit model. An implementation of the continuous process audit methodology is discussed. CPAS is designed to measure and monitor large systems, drawing key metrics and analytics into a workstation environment. The data are displayed in an interactive mode, providing auditors with a work platform to examine extracted data and prepare auditing reports. CPAS monitors key operational analytics, compares these with standards, and calls the auditor’s attention to any problems. Ultimately, this technology will utilize system probes that will monitor the auditee system and intervene when needed.

INTRODUCTION

This paper develops the concept and explores key issues in an alternate audit approach called the Continuous Process Audit Methodology. The paper focuses on an implementation of this methodology, the Continuous Process Audit System, developed at AT&T Bell Laboratories for the AT&T Internal Audit Organization.
The paper is divided into four sections. In the remainder of the Introduction, changes in Management Information Systems (MIS) that affect traditional auditing are discussed. In the second section, the continuous process audit methodology and the CPAS system are described and contrasted with the current audit model. The following section examines auditing and knowledge issues related to CPAM. The last section presents conclusions and suggests paths for future work.

“Current Technology, Forthcoming Technology, and the Auditor”

Traditional auditing has changed considerably in past decades, primarily as a result of changes in the data processing environment [Roussey, 1986; Elliot, 1986; Vasarhelyi and Lin, 1988; Bailey et al., 1989]. These changes have created major obstacles to performing the auditing and attestation function. The changes, and the technical obstacles they created for auditors, are discussed in Table 1.

-----------------------Insert Table 1 here -----------------------

For example, 1) the introduction of technology meant that auditors could no longer read data directly from its source (magnetic tape) and, unlike paper and indelible ink, this source could be modified without leaving a trace (phase 1 in Table 1); 2) the advent of time sharing and data communications allowed continuous access to data from many locations (phase 3), creating access exposures; and 3) database systems added more complexity to auditing due to the lack of an obvious mapping between the physical and logical organization of data (phase 4). Auditors dealt with these changes by 1) tailoring computer programs to perform traditional audit functions such as footing, cross-tabulations, and confirmations; 2) developing generalized audit software to provide information on data files; 3) requiring many security steps to limit logical access in multi-location data processing environments; and 4) developing specialized audit computers and/or front-end software to face the challenge of database-oriented systems. However, MISs continue to advance in design and technology. Corporate MISs, and particularly financial systems of the future, will [Vasarhelyi and Yang, 1988]: be more decentralized; process data at different levels (microcomputers, minis, and mainframes processing data seamlessly); capture data close to the economic event (and be increasingly paperless); benefit from common editing facilities; have localized report generation facilities; have continuous audit monitoring with mechanisms that allow for transaction tracing at any level of aggregation; post transactions close to real time to live or “shadow” files; and present daily “closing” of books with financial conditions and an accurate position of liquid assets on a national and multinational basis. These changes will cause additional obstacles for auditors and require further evolution in audit tooling and methodology. Table 2 relates some of these emerging MIS trends to audit issues.
-----------------------Insert Table 2 here -----------------------

For example, three-level processing allows a user to access data (e.g., from a financial system) residing on a mainframe or minicomputer from a personal computer via a network. Such access raises synchronization issues (i.e., between the data the user downloaded to a spreadsheet and the data on the actual database) as well as security issues. Similar reasoning can be extended to the other MIS trends in Table 2, summarized under “audit issue”. The same advanced technologies that create audit problems can be used by auditors to perform the audit function. For example, Cash et al. [1977] examine techniques that can be used to audit Accounting Information Systems. Other examples of these technologies are the use of advanced workstations [Wolitzky, 1985] and decision support systems [Alter, 1980] that incorporate analytic tools and expertise [Bailey et al., 1987] to be used on top of the corporate information system. CPAS is an example of the use of these technologies and is presented later in this paper.

“The Current Environment for Large Applications”

Many large applications today will typically use one type of Database Management System (e.g., IBM’s IMS) spread among several databases that relate to different modules of a system. Data may be kept in several copies of the database with identical logical structures and may be processed at the same location and/or in many locations. These systems can typically support both online and batch data processing and are linked to a large set of related feeders. Most large corporate systems today have a set of interconnected applications. For example, payroll and accounts payable serve as upward feeders for the general ledger, which in turn feeds downstream modules like the corporate financials and costing, acting in asynchronous patterns, feeding transactions and receiving adjustments and responses from the main system. Additionally, the main system can be the information base for downstream systems supporting management decisions and operations. This system may store a related family of databases including the master database, a transaction database, a pending transaction database, a control database, and an administrative database. The DBMS typically will have its own software for resource accounting and restart-recovery facilities, a query language, a communication interface, a data dictionary, and a large number of utility packages. In many corporations, system software consists of different systems, with a large majority of the systems still operating on mainframe computers, programmed in traditional programming languages, and interfacing primarily with mainframe-based databases. System hardware is a mix of different technologies with bridges among different standard environments, including microcomputers acting as feeders and analysis stations, large mainframes, a large number of telecommunication interfaces, middle-size system buffers, and large data storage devices.
Copies of system software may be distributed among different sites, with center-oriented as well as center-divergent information flow. Data can be transmitted in burst mode (accumulated by or for batch processing) as well as in an intensive flow (where data is entered when a transaction is measured, not accumulated for transmission) for online or close-to-online processing [Fox and Zappert, 1985]. Figure 1 illustrates an example of this type of system. Flows are received from large batch feeders, data are created through continuous feed by automatic processors, databases are queried and updated, output files feed other systems, and paper output is created for distribution and mailing. Financial reports are generated and outputs are fed to corporate general ledgers and corporate information systems.

-----------------------Insert Figure 1 here -----------------------

In large corporations, each of the feeder boxes is an independent system with its own databases, user query, data capture, and upstream and downstream feeders. Auditing these systems requires both the audit of the system itself and the examination and reconciliation of the interfaces between systems. These interfaces, the error-correction loops, and the overhead allocation loops pose additional problems for systems audit. Table 3 displays some of the characteristics of database systems and two evolutionary audit techniques (labeled level 1 and level 2) that can be used to evaluate and measure these systems.

-----------------------Insert Table 3 here -----------------------

Audit work on these systems is constrained by strong dependence on client system staff (for the extraction of data from databases) and typically entails reviewing the manual processes around the large application system. In traditional system audits these procedures were labeled “audit around the computer”. These procedures are described above as Level 1 and are characterized by examination of documentation, requests for user query of the database, examination of application summary data, sorting and listing of records by the user (not the auditor), a strong emphasis on paper, physical evaluation of security issues, plan analysis for the evaluation of restart and recovery, and manual reconciliation of data to evaluate application interfaces. Level 2 tasks, described in Table 3, would use the computer to perform database audits and would eliminate the intermediation of the user (auditee) in the audit of database systems. This hands-on approach utilizes queries to the data dictionary and direct use of the system by the auditor, and would rely on transaction evidence gathered by the auditor using the same database technology. The differences in the desired audit approach and the technological tooling necessary for performing level 2 tasks led to the development of some of the concepts used for Continuous Process Auditing.

CONTINUOUS PROCESS AUDITING

There are some key problems in auditing large database systems that traditional auditing, as well as the traditional EDP process, cannot fully solve. For example, given that traditional audits are performed only once a year, audit data may be gathered long after economic events are recorded. This often is too late to prevent economic loss. Traditionally, the attestation function has not been relevant in the prevention of loss. However, internal auditors have increasingly been asked to assume a much more proactive role in loss prevention. Another problem is that auditors typically receive only a “snapshot” of a system via several days of data supplied by the auditee. Unless these data coincide with some sort of problem in the system, the data may not be a good indication of system integrity. Surprise audits are seldom effective in this kind of environment, and compliance is difficult to measure because major and obtrusive preparation is necessary in the “around-the-computer” audit of systems.

In Continuous Process Auditing, data flowing through the system are monitored and analyzed continuously (i.e., daily) using a set of auditor-defined rules. System alarms and reports call the auditor’s attention to any deterioration or anomalies in the system. Continuous Process Auditing, then, is really an analytical review technique, since constantly analyzing a system allows the auditor to improve the focus and scope of the audit. Furthermore, it is also often related to controls, as it can be considered a meta form of control (audit by exception) and can also be used in monitoring control (compliance), either directly, by looking for electronic signatures, or indirectly, by scanning for the occurrence of certain events. The accounting literature has suggested other forms of supplementing traditional control techniques by creating a formal methodology of internal control representation and analysis [Bailey et al., 1985; Bailey et al., 1986] or by using the entity-relationship approach [McCarthy, 1979, 1982] to represent accounting events. Ultimately, if a system is monitored over time using a set of auditor heuristics, the audit can rely purely on exception reporting, and the auditor is called in only when exceptions arise. Impounding auditor knowledge into the system means that tests that would normally be performed once a year are repeated daily. This methodology will change the nature of evidence, timing, procedures, and effort involved in audit work. The auditor will place an increased level of reliance on the evaluation of flow data (while accounting operations are being performed) instead of evidence from related activities (e.g., preparedness audits). Audit work would be focused on “audit by exception”, with the system gathering exceptions on a continuous basis. The Continuous Process Audit is contrasted with the Traditional Audit in Table 4.

-----------------------Insert Table 4 here -----------------------
Traditional auditing involves the examination of archival data, substantially after the event, and emphasizes paper-based evidence. Continuous Process Auditing involves the examination of archival and immediate data, close to the event, and the use of magnetically recorded data.

“Key Concepts”

The placement of software probes into large operational systems for monitoring purposes may imply an obtrusive intrusion on the system and can result in performance deterioration. The installation of these monitoring devices must be planned to coincide with natural life-cycle changes of major software systems, and interim measures should be implemented to prepare for online monitoring. The current CPAS prototype consists of a data provisioning system and an advanced decision support system. Data can be gathered from tailored reports (files) from the application, reports from the application, and direct monitoring data. The approach used in CPAS is dual, evolving from a measurement phase with no intrusion and only minor system overhead to a monitoring phase where intrusion is necessary. Intrusion and system overhead may be limited by utilizing database backup and recovery traces as the main source of transaction data, dumping a copy of these traces onto a local workstation, loading the workstation with some expert software, and using it as a local interchange device; audit capability is substantially expanded as a result.

“Measurement”

Copies of key management reports are issued and transported through a data network to an independent audit workstation at a central location. These reports are stored in raw form, and data are extracted from them and placed in a database. The fields in the database map to a symbolic algebraic representation of the system that is used to define the analysis. The database is tied to a workstation, and analysis is performed at the workstation using the information obtained from the database.

“Monitoring”

In the monitoring phase, audit modules will be impounded into the auditee system. This will allow the auditor to continuously monitor the system and will provide sufficient control and monitoring points for management retracing of transactions. The level of aggregation and the difficulties of balance and transaction tracing that are prevalent in current systems will decrease in the future, as the processing economies that dictated the limited traceability of transactions will not be needed once systems become more powerful.

The Continuous Process Audit System (CPAS) prototype uses the “measurement” strategy of data procurement. This is illustrated in Figure 2. The auditor logs into CPAS and selects the system to be audited. The front end of CPAS allows the auditor to look at copies of the actual reports used as the source of data for the analysis. From here the auditor can move into the actual analysis portion of CPAS. In CPAS, the system being audited is represented as flowcharts on the workstation monitor. A high-level view of the system (called data flow 0, or DF level 0, in Figure 2) is linked hierarchically to other flowcharts representing more detail about the system modules being audited. This tree-oriented view of the world, which allows the user to drill down into the details of a graphical representation, is conceptually similar to the Hypertext approach [Gessner, 1990]. The Hypertext approach is not new, being traceable to the 1960s work of Ted Nelson; it is currently quite popular due to its implementation on personal computers, its affinity to object-oriented thinking, and its many implementations, both commercial and public domain. The analysis is structured along these flowcharts, leading the auditor to think hierarchically.

-----------------------Insert Figure 2 here -----------------------

An integrated view of the system is available at DF level 0. This logical view of the system can be associated with diagnostic analytics that count the number of exceptions and/or alarms current in the system. Detailed information about each main module is available at the lower levels. This type of thinking is similar to “hypertext” conceptualization, where symbolic and relational links can be specified across levels. This information is presented primarily as metrics and analytics.
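To make the hierarchical, hypertext-like representation concrete, the following is a minimal sketch of how such a drill-down tree of flowcharts, each carrying a diagnostic exception count, might be modeled. The node names echo the billing example discussed below, but the class, fields, and counts are invented for illustration and do not reflect the actual CPAS implementation.

```python
# Minimal sketch of a CPAS-style hierarchical system view (hypothetical names;
# the actual prototype was built with standard UNIX tools on a SUN workstation).
from dataclasses import dataclass, field

@dataclass
class SystemNode:
    """One flowchart in the hierarchy; DF level 0 is the root ("Overview")."""
    name: str
    alarms: int = 0                           # exceptions flagged at this node
    children: list["SystemNode"] = field(default_factory=list)

    def total_alarms(self) -> int:
        # The DF level 0 view aggregates exception counts from every module.
        return self.alarms + sum(c.total_alarms() for c in self.children)

    def drill_down(self, name: str) -> "SystemNode":
        # Selecting a node in the hierarchy window opens its flowchart.
        return next(c for c in self.children if c.name == name)

overview = SystemNode("Overview", children=[
    SystemNode("Process Transactions"),
    SystemNode("Process Errors", alarms=2),
    SystemNode("Customer Billing", alarms=1),
])
print(overview.total_alarms())                       # -> 3 alarms system-wide
print(overview.drill_down("Customer Billing").name)  # -> Customer Billing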
“Metrics”

Metrics are defined as direct measurements of the system, drawn from reports in the measurement stage. These metrics are compared against system standards. If a standard is exceeded, an alarm appears on the screen. For example, in the auditing of a billing system, the number of bills to be invoiced is extracted from a user report. The number of bills not issued due to a high-severity error in the data is captured, as well as the total dollar amount of bills issued. These three numbers are metrics that relate to the overall billing process.

“Analytics and Alarms”

Analytics are defined as functional (natural flow), logical (key interaction), and empirical (e.g., it has been observed that ...) relationships among metrics. Specific analytics related to a particular system module can be derived from the auditor, management, user experience, or historical data from the system. Each analytic may have a minimum of three dimensions: 1) its algebraic structure, 2) the relationships and contingencies that determine its numeric value at different times and in different situations, and 3) rules of thumb or optimal rules on the magnitude and nature of variance that may be deemed “real variance” to the extreme of alarms. For example, a billing analytic would state that dollars billed should be equal to invoices received, minus the value of failed edits, plus (or minus) the change in the number of dollars in retained invoices. The threshold number of expected invoices for that particular day or week (allowing for seasonality) must be established to determine whether an alarm should be fired. Actual experience with these issues indicates that several levels of alarm are desirable: 1) minor alarms dealing with the functioning of the auditing system, 2) low-level operational alarms to call to the attention of operating management, 3) higher-level alarms to call the attention of the auditor and trigger “exception audits”, and 4) high-level alarms to warn auditing and top management of serious crises. Establishing these alarm thresholds is a second harmonic development. The data and experience needed to understand the phenomena being measured to the level of specification of alarm standards are probably not available in most organizations; experience with a CPAS-like system will aid in their development.
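As one hedged illustration of how this billing analytic and the four alarm levels could be encoded, consider the sketch below. The function, field names, and threshold values are assumptions made for the example; as noted above, the actual alarm standards remain to be developed from experience with the measured system.

```python
# Hypothetical encoding of the billing analytic: dollars billed should equal
# invoices received, minus failed edits, plus the change in retained invoices.
from enum import Enum
from typing import Optional

class AlarmLevel(Enum):
    MINOR = 1        # level 1: functioning of the auditing system itself
    OPERATIONAL = 2  # level 2: attention of operating management
    AUDITOR = 3      # level 3: auditor attention, triggers "exception audits"
    CRISIS = 4       # level 4: warns auditing and top management

def billing_variance(dollars_billed: float, invoices_received: float,
                     failed_edits: float, delta_retained: float) -> float:
    """Out-of-balance amount; zero means the analytic holds."""
    return dollars_billed - (invoices_received - failed_edits + delta_retained)

def alarm_for(variance: float, tolerance: float = 100.0) -> Optional[AlarmLevel]:
    # Assumed rule of thumb: escalate as the variance grows relative to a
    # (seasonally adjusted) tolerance established from historical data.
    v = abs(variance)
    if v <= tolerance:
        return None
    if v <= 10 * tolerance:
        return AlarmLevel.OPERATIONAL
    if v <= 100 * tolerance:
        return AlarmLevel.AUDITOR
    return AlarmLevel.CRISIS

v = billing_variance(dollars_billed=97_000, invoices_received=100_000,
                     failed_edits=1_200, delta_retained=-300)
print(v, alarm_for(v))   # -> -1500.0 AlarmLevel.AUDITOR
```

Separating the algebraic structure (the equation) from the escalation rule mirrors the three dimensions of an analytic listed above: the same equation can be reused while tolerances and escalation rules are tuned per module and season.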
“Software Implementation”

The CPAS software was implemented under the NeWS windowing system on a SUN workstation. The entire software was constructed using standard UNIX tools with a minimum of low-level programming. A commercially available relational database was used in the delivery device. The concept, however, can be extended and implemented piece by piece using standard PC tools. Conceivably, the methodology can be implemented in many different ways, from a pure PC implementation to a full-fledged distributed computing solution with the “audit computer” as the self-contained destination of monitoring/measurement data.

Figure 3 was prepared using CPAS and has the look and feel of any CPAS application. It shows a high-level view of a theoretical billing system. The hierarchy window on the left in the figure indicates what part of the billing system is represented by the flowchart. In this example, the flowchart represents the base node of the billing system hierarchy, i.e., an overview of the system. This node is called “Overview” in the hierarchy window. The auditor can use the hierarchy window to move to any flowchart in CPAS by simply selecting the desired node.
------------------------Insert Figure 3 Here ------------------------

The billing system consists of six major modules: Process Transactions; Process Errors; Customer Billing; Payments, Treatments and Journals; Customer Inquiry; and Process New Orders. Billing data first enter the Process Transactions module, where high-level edits are performed. Any errors from this process are sent to the Error Processing module. Corrected errors are sent back through the front end of the system. Transactions that successfully pass through the front end are sent to the Billing module, where customer accounts are extracted, amounts due are calculated, and the bills are produced. Errors from this process are sent to the Error Correction module. Billing information is sent to the Journals function, where payment and treatment information is processed and the customer database is updated. The system also contains a module that deals with any questions a customer may have about his/her account and a module that processes new orders for service. The alarm report (Figure 4) at this level states that there are three alarm conditions outstanding in the system on 4/1/89: there are ten accounts out of balance in the billing module, 2000 errors were sent to the error module, and the dollar value of the error file has exceeded the standard.

----------------------Insert Figure 4 here ----------------------
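A toy rendition of such an alarm report, reproducing the three Figure 4 conditions, might look like the following. The standards and the record layout are invented for the example; the paper does not specify them.

```python
# Toy alarm report for the overview level. The metric values come from the
# Figure 4 narrative; the standards are assumed, chosen so the alarms fire.
from datetime import date

metrics = {
    "accounts_out_of_balance": 10,
    "errors_sent_to_error_module": 2000,
    "error_file_dollars": 54_000.0,
}
standards = {
    "accounts_out_of_balance": 0,
    "errors_sent_to_error_module": 1500,
    "error_file_dollars": 50_000.0,
}

def alarm_report(day: date) -> list[str]:
    """Report every metric whose value exceeds its standard."""
    return [f"{day:%m/%d/%y}: {name} = {metrics[name]} (standard: {standards[name]})"
            for name in metrics if metrics[name] > standards[name]]

for line in alarm_report(date(1989, 4, 1)):
    print(line)    # three alarm conditions outstanding, as in Figure 4
```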
The auditor may wish to look at the Customer Billing module in more detail to investigate the out-of-balance condition. The auditor would use a mouse to select the Billing node in the hierarchy window. This would cause the branch to the selected node to be highlighted, and a new flowchart representing the Customer Billing module would appear on the workstation monitor. This is illustrated in Figure 5. The date bar in the figure indicates the date the analysis uses as the base date. Here the metrics, indicated as boxes next to the flowchart, show the flow of accounts through the Customer Billing module on 4/1/89. At this, or any, level of the system, the auditor can choose to look at alternate metric dimensions (e.g., transactions, records), if appropriate. Additionally, if multiple copies of the software exist in different locations, the auditor can choose the level of aggregation he or she is interested in. These metrics are used to perform a reconciliation, and different modules would have different metrics associated with them. The alarm (found on the lower left of the figure) indicates that there were ten accounts lost between the Format Bill module and the Print Bill module on this date. This is the same alarm mentioned earlier; it relates to an analytic, impounded into the system in the form of a reconciliation equation, that is out of balance. This reconciliation is performed automatically with a frequency equal to that of report generation, so the auditor can monitor how often the reconciliation fails.

----------------------Insert Figure 5 here ----------------------
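The reconciliation equation behind that alarm can be sketched as a simple flow check between the two modules. The metric names and counts below are hypothetical, chosen only to reproduce the ten lost accounts.

```python
# Sketch of the Customer Billing reconciliation (hypothetical metric names):
# accounts entering Print Bill should equal accounts leaving Format Bill.
def accounts_lost(formatted: int, printed: int) -> int:
    """Accounts lost between the Format Bill and Print Bill modules."""
    return formatted - printed

flow = {"Format Bill": 125_010, "Print Bill": 125_000}   # assumed 4/1/89 counts
lost = accounts_lost(flow["Format Bill"], flow["Print Bill"])
if lost:
    print(f"ALARM: {lost} accounts lost between Format Bill and Print Bill")
```

Because the check runs at the same frequency as report generation, each run contributes one observation to the reconciliation history examined next.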
The auditor may wish to look at the history of the reconciliation. Figure 6 is a three-level time series showing the number, total value, and percentage of accounts lost for a three-week period ending 4/1/89.

----------------------Insert Figure 6 here ----------------------

It appears from the graph that the out-of-balance condition has been occurring sporadically for quite some time. This could indicate inadequacy of, or poor compliance with, internal controls.
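One way to quantify the sporadic pattern Figure 6 suggests is to compute the failure rate of the reconciliation over the window. The daily counts below are fabricated purely to illustrate the computation; they are not Figure 6's data.

```python
# Illustrative three-week history of the reconciliation (invented values):
# number of accounts lost per reporting day.
daily_lost = [0, 0, 4, 0, 0, 0, 9, 0, 0, 0, 0, 7, 0, 0, 2, 0, 0, 0, 0, 0, 10]

failures = [n for n in daily_lost if n != 0]
failure_rate = len(failures) / len(daily_lost)
print(f"{len(failures)} out-of-balance days in {len(daily_lost)} "
      f"({failure_rate:.0%}); worst day lost {max(failures)} accounts")
# A sporadic but recurring failure like this suggests a control weakness
# rather than a one-time processing error.
```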
More detailed analytics and metrics relating to the actual billing process and to the interfaces between this module and other modules in the system are found at the different levels. This information, taken together, presents an integrated diagnostic view of the system being audited. “Text”, explaining the flowcharts, and “Help”, explaining how to use the system, are available at each level. The auditor can print out screens, reports, or graphs at any time for writing his/her audit reports. Complementing the actual hands-on audit work is an auditor platform, accessible at any level, which can include a series of different functions. This platform should ultimately contain at least a statistical package, a graphics package, a spreadsheet package (including a filter to the database), a report generator, and a text editor. These tools can be used for ad hoc analysis or be linked to the “wired-in” procedures in CPAS. An even richer technological environment may incorporate specific audit document preparation tools that use high-technology hardware to read and interpret printed materials [Kahan et al., 1986]; large amounts of information can be stored and accessed directly using optical disk (WORM) technology. Many firms (e.g., Imnet Corporation, Teletrak Advanced Technologies Systems Inc.) are developing document image technology to access large optical data storage devices. The CPAS technology and software base, and the potential set of tools associated with it, must be considered in conjunction with the auditor, the auditor’s environment, and the auditor knowledge base.

AUDITOR AND KNOWLEDGE ISSUES

The set of analytics and heuristics used in CPAS will ultimately include a wide variety of algorithms, ranging from flow-based rules to expert algorithms drawn using techniques from knowledge engineering. These algorithms will be used both in the auditor platform, as analytical supplements, and impounded into software probes in the monitoring stage.
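One plausible organization for such a mix of flow-based and empirical heuristics, offered here only as an assumed design sketch rather than the CPAS internals, is a registry of named rules evaluated against each day's metrics:

```python
# Assumed design sketch: auditor heuristics impounded as named rules that a
# platform evaluates against each day's metrics (hypothetical names throughout).
from typing import Callable

Rule = Callable[[dict], bool]          # returns True when the rule is violated
RULES: dict[str, Rule] = {}

def rule(name: str):
    """Register an auditor-defined heuristic under a readable name."""
    def register(fn: Rule) -> Rule:
        RULES[name] = fn
        return fn
    return register

@rule("flow: bills issued never exceed transactions received")
def flow_rule(m: dict) -> bool:
    # Functional (natural flow) relationship between two metrics.
    return m["bills_issued"] > m["transactions_received"]

@rule("empirical: daily error volume within seasonal norm")
def empirical_rule(m: dict) -> bool:
    # Empirical relationship; the 1.5 factor is an assumed rule of thumb.
    return m["errors_today"] > 1.5 * m["errors_typical"]

metrics = {"bills_issued": 99_000, "transactions_received": 100_000,
           "errors_today": 3_100, "errors_typical": 1_800}
print([name for name, check in RULES.items() if check(metrics)])
# -> only the empirical rule fires for this (invented) day
```

Keeping the rules declarative and named would let the same definitions drive both the analytical supplements on the auditor platform and, eventually, the impounded monitoring probes.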
Expert systems techniques have been examined by several auditing researchers [see Kelly et al., 1988] and have been implemented in practice on a limited basis, dealing with certain tax issues (tax accruals) and financial accounting issues (e.g., bank loan portfolio estimation) [Hansen and Messier, 1987; Vasarhelyi, 1988]. Audit knowledge is needed to supplement the simple comprehension of the system being audited and to deal with the very complex stage of data gathering, analysis, and knowledge organization [Buchanan and Shortliffe, 1984] necessary for programming the auditing probes.
The CPAS prototype was tested on two very large financial systems. The first application of the CPAS technology was an evolving system whose features changed rapidly. The idea was to put a prototype in place that contained basic analytics and then work with the auditors, as they used CPAS, to build more expertise into the system. The audit knowledge elicitation process was to focus on three areas:

Archival Recording: Interviews with auditors and examination of working papers and audit reports for identification of current audit steps, items of data being examined, specific rules concerning required audit evidence, and any actual procedures of data gathering, search, and analysis. This process is analogous to work that tries to establish descriptive models of auditor behavior. For example, “think aloud” techniques [Biggs and Mock, 1983] provide some insight into the auditor’s thought processes.

Heuristic Discovery: Application of knowledge engineering techniques to identify non-formulated rules, desired tooling, types of inference, methods of fuzzy set resolution, etc. [Shimura and George, 1973; Shank and Abelson, 1977; Hayes-Roth, 1978].

Methodological Development: Working with auditors to further develop the “Continuous Process Audit” methodology, monitoring the usage of the auditor workstation in the measurement phase, and impounding more audit expertise into the audited system [Shaw and Simon, 1958; Simon, 1973, 1979].

The problem domain in question tended to be one with “diffuse knowledge” [Halper et al., 1989], where a large set of sources of knowledge was necessary and knowledge was ultimately captured from a much wider set of experts than originally conceived. The issues of the startup cost of impounding the system description into the CPAS platform and of maintaining the knowledge base became very important. However, the process of knowledge acquisition and recording used under CPAS is not unlike the phases of internal control evaluation and documentation for workpapers that an auditor has to perform. The level of auditor comprehension of the system tends to be deeper under this approach if the auditor (not a systems analyst) is to perform knowledge capture. In the long range, much of this work can be linked to the use of CASE-type tools, where the knowledge is captured at design time and could be easily transported, if not directly used, to the