Businesses and institutions must collect and store temporal data for accountability and traceability. This paper presents an approach to handling transaction lineage that considers how data can be stored at different timestamp granularities, along with methods for refreshing data warehouses with time-varying data via batch cycles. Identify three ways transaction lineage can be used and explain how each is relevant to temporal data. In which industries do you think transaction lineage will always be relevant, and why?
Coexistence of Load and Query Runs
During the load process, the stored procedures perform their DML under the DBMS's default 'write lock' for the new refresh cycle, while the report SQL can be defined with a 'locking for access' modifier to retrieve the rows for a specific time slice; these two lock types are compatible with each other. The author of this article tested this by running, in one database session, a stored procedure that performed update and insert operations on the active table: it updated 19 million rows to expire them and inserted another 19 million rows with new row-effective timestamps. At the same time, a report query was run repeatedly from another session. The DBA monitored both sessions to see whether either was blocking or waiting, since write and read access were occurring on the same active table at the same time. No blocking or waiting was found, because the DBMS 'write lock' and 'locking for access' lock are compatible.
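The interplay described above can be sketched in Teradata-style SQL. This is an illustrative sketch only: the table and column names (txn_active, txn_stage, acct_id, txn_amt, row_eff_ts, row_exp_ts) and the sample time slice are assumptions, not taken from the paper. The points being shown are that the load DML runs under the default write lock while the report query requests an access lock, so the two can run concurrently.

    -- Load side (inside the stored procedure): runs under the default WRITE lock.
    -- Expire the current versions of the changed rows ...
    UPDATE txn_active
    SET row_exp_ts = CURRENT_TIMESTAMP
    WHERE acct_id IN (SELECT acct_id FROM txn_stage);

    -- ... and insert the new versions with a fresh row-effective timestamp.
    INSERT INTO txn_active (acct_id, txn_amt, row_eff_ts, row_exp_ts)
    SELECT acct_id, txn_amt, CURRENT_TIMESTAMP, TIMESTAMP '9999-12-31 23:59:59'
    FROM txn_stage;

    -- Report side (separate session): the ACCESS lock does not conflict with the
    -- WRITE lock above, so neither session blocks or waits on the other.
    LOCKING TABLE txn_active FOR ACCESS
    SELECT acct_id, txn_amt
    FROM txn_active
    WHERE row_eff_ts <= TIMESTAMP '2023-06-30 23:59:59'   -- requested time slice
      AND row_exp_ts >  TIMESTAMP '2023-06-30 23:59:59';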
To ensure that report users see consistent data, they are given business views that reflect only the last completed data refresh. Each time the refresh of all related subject areas for a particular application completes, either the views are recompiled with the new timestamps, or the views join to a metadata table to obtain the most recent refresh timestamp and use it as a filter in the report queries.
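A minimal sketch of the metadata-driven variant follows, in the same Teradata-style SQL; the view and metadata names (rpt_txn_v, etl_refresh_log, subject_area, last_refresh_ts) are hypothetical. The view joins to the metadata row maintained by the load process and exposes the state of the active table as of the last completed refresh, so rows touched by a cycle that is still in flight stay hidden.

    REPLACE VIEW rpt_txn_v AS
    LOCKING ROW FOR ACCESS
    SELECT a.acct_id,
           a.txn_amt,
           a.row_eff_ts,
           a.row_exp_ts
    FROM txn_active a
    JOIN etl_refresh_log m
      ON m.subject_area = 'TRANSACTIONS'      -- one row per subject area, maintained by the load process
    WHERE a.row_eff_ts <= m.last_refresh_ts    -- only rows effective as of the last completed refresh
      AND a.row_exp_ts  > m.last_refresh_ts;   -- and not yet expired at that point in time

Under this arrangement, the final step of the load cycle would simply be to advance last_refresh_ts in etl_refresh_log, which publishes the new cycle to all report users without recompiling any views.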