On-Line Analytical Processing (OLAP)
Informally, OLAP can be defined as the interactive process of creating, managing and analyzing data, as well as reporting on it, the data usually being perceived and manipulated as though it were stored in a multidimensional array.
Note: The term OLAP first appeared in a white paper written for Arbor Software Corporation in 1993, though the concept is much older (as is the case for the term "data warehouse").
OLAP almost invariably requires data aggregations, usually in many different ways. Examples of such aggregations are:
Q1 : total sales by store (over a specified period)
Q2 : total sales by city (over a specified period)
Q3 : average sales by city and category of products (over a specified period)
The question is what kind of data model is most appropriate for easily formulating (and answering) queries that almost invariably involve data aggregations. Unfortunately, no generally accepted model has emerged so far. While waiting for such a model to emerge, many big companies build data warehouses in ad hoc ways, mostly based on what the big software houses offer, under names such as the following:
ROLAP : Relational OLAP
MOLAP : Multidimensional OLAP
HOLAP : Hybrid OLAP
However, none of these approaches is satisfactory, as none of them offers a clear separation between the physical and the conceptual level (in the way this is done in the relational model).
Nevertheless, the fact is that most big companies realize today that “intelligence = money”,
and do build data warehouses anyway. As a result, the data warehouse market is one of the
fastest growing markets : from 2 billion dollars in 1995 to over 8 billion in 2003!
SQL Extensions for OLAP
As we mentioned earlier, OLAP almost invariably requires data aggregations, and SQL does support such aggregations through its Groupby instruction. However, during OLAP, each data analysis session usually requires repeated aggregations of a similar type over the same data. Formulating many similar but distinct queries is not only tedious for the analyst but also quite expensive in execution time, as all these similar queries pass over the same data again and again.
It thus seems worthwhile to try to find a way of requesting several levels of aggregation in a single query, thereby offering the implementation the opportunity to compute all of those aggregations more efficiently (i.e., in a single pass). Such considerations are the motivation behind the currently available options of the Groupby clause, namely,

- Grouping Sets
- Rollup
- Cube
These options are supported in several commercial products and they are also included in the current versions of SQL. We illustrate their use below, assuming the table SP(S#, P#, QTY), whose attributes stand for “Supplier Number”, “Product Number”, and “Quantity”, respectively.
The Grouping Sets option allows the user to specify exactly which particular groupings are to be performed:
Select S#, P#, Sum(QTY) As TOTQTY
From SP
Group By Grouping Sets ((S#), (P#))
This instruction is equivalent to two Groupby instructions, one in which the grouping is by S# and one in which the grouping is by P#.
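To see the effect, the above Grouping Sets query computes essentially the same rows as the union of the two separate Groupby queries, with a null filling the column that is not grouped on in each branch (some SQL dialects require an explicit Cast for such typed nulls); the following union formulation is only a sketch of this equivalence:
Select S#, Null As P#, Sum(QTY) As TOTQTY
From SP
Group By S#
Union All
Select Null As S#, P#, Sum(QTY) As TOTQTY
From SP
Group By P#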
Changing the order in which the groupings are written does not affect the result.
The remaining two options, Rollup and Cube, are actually shorthands for certain Grouping Sets combinations. Consider first the following example of Rollup :
Select S#, P#, Sum(QTY) As TOTQTY
From SP
Group By Rollup (S#, P#)
This instruction is equivalent to (or a shorthand for) the following instruction :
Select S#, P#, Sum(QTY) As TOTQTY
From SP
Group By Grouping Sets ((S#, P#), (S#), ( ))
Note that, in the case of Rollup, changing the order in which the attributes are written affects the result.
Finally, consider the following example of Cube :
Select S#, P#, Sum(QTY) As TOTQTY
From SP
Group By Cube ( S#, P# )
This instruction is equivalent to the following one :
Select S#, P#, Sum(QTY) As TOTQTY
From SP
Group By Grouping Sets ((S#, P#), (S#), (P#), ( ))
In other words, the Cube option forms all possible groupings of the attributes listed in the Groupby clause. Therefore, in the case of Cube, changing the order in which the attributes are written does not affect the result.
We note that, although the result of each of the above Groupby options usually consists of two or more distinct answer tables, SQL bundles them (unfortunately) into a single table, using nulls.
We also note that OLAP products often display query results not as SQL tables but as cross tabulations of SQL tables. The cross tabulation of a SQL table is a multidimensional table indexed by the values of the key attributes of the SQL table, in which the entries are the values of the dependent attributes. Cross tabulation, together with various visualization techniques, is especially useful for producing reports from query results, and several report-generating tools are available on the market today.
Finally, we note that the decision making process in an enterprise usually requires a number of reports produced periodically from the answers to a specific, fixed set of queries (such as monthly average sales per store, or per region, etc.). Such queries are usually called “continuous queries” or “temporal queries”. It is important to note here that a temporal query does not change over time; what changes over time is the answer to the query and, as a result, the report produced from the answer changes as well.
The Dimensional Model
In this section we present a model for data warehouses, called the dimensional model. According to this model, a data warehouse is organized around a so-called star schema, over which one formulates OLAP queries.
Star Schema
We start with a set of attributes U, which represent the properties of interest in an application. Each attribute A is associated with a set of values, or domain, denoted as dom(A).
A star schema over U (or simply a star over U) is a labelled acyclic graph satisfying the following properties (see also Figure 2):
Nodes:
- There is only one root, consisting of a non-empty set R of attributes.
- Each node other than the root consists of a single attribute.
Arrows:
- There is one arrow R → D for each attribute D in R.
- There is no path between attributes of R.
- There is at least one arrow R → M, where M is not in R.
The root R is called the origin or the centre of the star and the attributes of R are called the dimensions of the star. We shall assume that the domain of node R is the Cartesian product of the domains of its attributes.
A path beginning with an arrow R → D, where D is a dimension, is called a dimensional path, and each node in the path is called a level of D, D itself being the base level.
A path beginning with an arrow R → M, where M is not in R, is called a measuring path, M is called a measure, and each node in the path is called a level of M, M itself being the base level.
Note: We shall assume that a star is a connected graph. We shall also assume that all arrow labels are distinct; since labels distinguish arrows, we may thus allow a star to have "parallel" arrows (i.e., distinct arrows with the same source and target).
A star database (or simply database) over a star schema S is a function δ that associates:
- each arrow R → D, where D is a dimension, with the projection of dom(R) over D;
- each other arrow f: X → Y of S with a function δ(f): dom(X) → dom(Y).
We shall often use the term "function f" to mean δ(f).
Note: Each function f of a star database can be given either extensionally, i.e., as a set of pairs <x, f(x)>, or intensionally, i.e., by giving a formula that allows one to compute f(x). For example, referring to Figure 2, the function from R to Sales will be given extensionally, whereas the function from Date to Month will be given intensionally: dd/mm/yy ↦ mm/yy.
A Functional Algebra
The functions stored in a star database represent information about the application being modelled. By combining these functions we can derive new information about the application from the information already stored. This is done using the following operations:

- projection
- composition
- pairing
- restriction
These operations, which we define below, constitute what we call the functional algebra.
Projection
This is the usual projection function that accompanies a (Cartesian) product of sets. In fact, given the product of n sets, we have 2^{n} projection functions, one for each subset of the n sets. In particular, the projection corresponding to the empty set is a constant function mapping every tuple in the product to the empty tuple (i.e., to the only tuple that can be defined over the empty set).
Note: The product of n sets is always the same (up to isomorphism), no matter how the factors are parenthesized. For example, A×B×C ≅ A×(B×C) ≅ (A×B)×C ≅ B×A×C, etc. As a consequence, we shall often ignore parentheses or even rearrange them in a convenient manner.
Composition
It takes as input two functions, f and g, such that range(f) ⊆ def(g), and returns a function g∘f: def(f) → range(g), defined by: (g∘f)(x) = g(f(x)), for all x in def(f).
Pairing
It takes as input two functions f and g, such that def(f) = def(g), and returns a function f⊗g: def(f) → range(f)×range(g), defined by: (f⊗g)(x) = <f(x), g(x)>, for all x in def(f). Intuitively, this is the tuple-forming operation.
Restriction
It takes as arguments a function f and a set Y, such that Y ⊆ range(f), and returns a function f/_{Y}: def(f) → Y, defined by: f/_{Y}(x) = f(x), if f(x) is in Y, and undefined otherwise. The set Y can be given either explicitly or using a set-theoretic expression. Here are some examples, assuming that the range of f is the set of integers:
Y= {3, 4, 5, 9}, Y= {f(i) / i ≤ 9}, Y= Y1\Y2 where Y1= {f(i) / 3 ≤ i ≤ 9} and Y2= {f(i) / i ≥ 5}
Note: We may also consider restricting the domain of definition of the function f, as is usually done in mathematics.
Note that in the functional algebra we have the closure property: both the argument(s) and the result of each operation are functions. Therefore, as in the relational algebra, we can substitute subexpressions in an expression by their results, i.e., we can use parallelism when evaluating expressions of the functional algebra.
The Path Language of a Star Schema
Each star schema S is associated with a path language, which allows deriving new arrows from those present in S. This language is the set of all path expressions over S. A path expression e over S is defined by the following grammar, where "::=" stands for "can be", p and q are path expressions, and source(e), target(e) are self-explanatory:
e ::= f, where f: X → Y is an arrow of S; source(e)=X and target(e)=Y
q∘p, where target(p)=source(q); source(e)=source(p) and target(e)=target(q)
p⊗q, where source(p)=source(q); source(e)=source(p) and target(e)=target(p)×target(q)
p/_{Y}, where Y ⊆ target(p); source(e)=source(p) and target(e)=Y
_{X}/p, where X ⊆ source(p); source(e)=X and target(e)=target(p)
a projection π_{X}, over any source or target of a path expression
Roughly speaking, each path expression is a well formed expression whose operands are arrows from S and whose operators are those of the functional algebra.
Given a path expression e over S, and a database δ over S, the evaluation of e with respect to δ, denoted eval(e, δ), is defined as follows:
1/ replace each arrow f in e by the function δ(f);
2/ perform the operations of the functional algebra (as indicated in the expression);
3/ return the result (which is a function).
Given a star schema S, the definition of an OLAP query is based on what we call an OLAP pattern over S (or simply pattern). Intuitively, a pattern over S is a pair of paths starting at the root of S. More formally, a pattern over S is a pair <p, q> of path expressions over S such that source(p)=source(q)=R.
A pattern <p, q>, where target(p)=L and target(q)=M, is denoted graphically as shown below:
L <--p-- R --q--> M
Intuitively, each pattern provides a setting where the following tasks can be performed:

- group the R-values according to their L-values under p;
- within each group, apply some aggregate operation to the set of all images of R-values under q.
Consequently, L is called the aggregation level and M the measure level of the pattern.
The above considerations lead naturally to the following definition of OLAP query:
An OLAP query over S is a pair Q= <P, op>, where P is a pattern over S and op is an operation over the measure level of P, usually an aggregate operation. For example, if the measure level of P has int as domain, then any operation over the integers (such as sum, product, average, maximum, minimum, etc.) can be declared as the operation of the OLAP query Q.
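For example, over the schema of Figure 2 (assuming that its arrows include R → Store, Store → City and R → Sales), query Q1 (total sales by store) can be expressed as the OLAP query Q1= <<p, q>, sum>, with p= the arrow R → Store and q= the arrow R → Sales; Store is then the aggregation level and Sales the measure level. Similarly, query Q2 (total sales by city) takes as p the composition of R → Store with Store → City, so that the aggregation level becomes City.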
Given a database δ over a star schema S, and an OLAP query Q= <P, op> over S, the evaluation of Q with respect to δ, denoted eval(Q, δ), is a set of pairs <l_{i}, res_{i}> defined as follows (assuming Q= <<p, q>, op>, as above):
1/ p:= δ(p); q:= δ(q) % let range(p) = {l_{1}, …, l_{m}} %
2/ compute the inverse function p^{-1} % let B_{1}= p^{-1}(l_{1}), …, B_{m}= p^{-1}(l_{m}) %
3/ res_{i}:= op(<q(b_{1}), …, q(b_{ni})>), i= 1, …, m % let B_{i}= {b_{1}, …, b_{ni}} %
4/ return {(l_{1}, res_{1}), …, (l_{m}, res_{m})}
Note that the result of the evaluation is just a relation, not necessarily a function. Also note that it is not necessary to apply the same operation to every block B_{i}: one may very well specify different operations to be applied to different blocks B_{i} (possibly a different operation op_{i} for each block B_{i}). Finally, note that the computation of the inverse function p^{-1} can reuse previously computed inverses, using the following "formulas" that are easy to prove:
composition: (g∘f)^{-1}(z) = f^{-1}(g^{-1}(z)),
i.e., the z-block under g∘f is the union of all y-blocks under f, where y is in the z-block under g
pairing: (f⊗g)^{-1}((y, z)) = f^{-1}(y) ∩ g^{-1}(z)
restriction: (_{X}/f)^{-1}(y) = X ∩ f^{-1}(y); and (f/_{Y})^{-1}(y) = f^{-1}(y) if y is in Y, and empty otherwise
Using an SQL-like construct, we can specify an OLAP query as follows:
Select <dim-level>, op(<measure-level>) As <name>
From <dim-path>, <measure-path>
Where <condition>
Data Marts
Informally, a data mart is a subject-oriented data warehouse whose data is derived from those of the "central" data warehouse. More formally, consider a data warehouse with schema S. A data mart over S is a data warehouse with schema S’ such that every arrow f in S’ is defined by a path expression e over S (i.e., the function associated with f is the one obtained by evaluating e). In other words, a data mart is just a view of a data warehouse.
For example, referring to Figure 2, we may wish to create a data mart dealing only with monthly sales per store and item. Such a data mart will have the following as a star schema (where R= {Date, Store, Item}):
R → Month    R → Store    R → Item    R → Sales
Of these arrows, the first is the composition of two arrows of the data warehouse schema (R → Date and Date → Month), while the remaining ones are arrows that appear directly in the data warehouse schema.
As in the case of relational views, the data of a data mart are derived from those of the data warehouse, by evaluating the path expressions defining the arrows of the data mart schema. Moreover, this derived data may or may not be stored depending on the application. In other words, as in the case of relational views, we may have virtual or materialized data marts.
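In a relational (ROLAP) implementation of the kind described in the next section, such a data mart can be defined as a view. The following is only a sketch, with hypothetical table and attribute names: FACT(Date, Store, Item, Sales) stands for the fact table and DATE_DIM(Date, Month) for the Date dimension table, and the join computes the composed arrow R → Month:
Create View MART_MONTHLY (Date, Month, Store, Item, Sales) As
Select F.Date, D.Month, F.Store, F.Item, F.Sales
From FACT F, DATE_DIM D
Where F.Date = D.Date;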
We note that, as the cost of setting up a central data warehouse is usually very high, several medium-sized enterprises start by setting up a subject-oriented data warehouse, i.e., a data mart. The advantage of this approach is that one can get acquainted with data warehouse technology, at low cost, before embarking on full-scale data warehousing. The disadvantage is having to integrate one or more data marts into a single, "central" warehouse, if the enterprise decides to invest in that direction at some later time.
Implementation Issues
The performance of OLAP queries depends on the way the star schema is implemented. A star schema can be implemented using either relational variables (i.e., tables) or multidimensional arrays (possibly some mixture of the two). These two ways are known as Relational OLAP and Multidimensional OLAP, respectively.
Relational OLAP (ROLAP)
In relational implementations, a star schema is represented by a set of tables defined as follows:
- there is one table F containing the attributes of R together with all measure levels of S, called the fact table of S;
- for each dimension D, there is one table containing all the dimension levels of D, called the D-dimension table (or D-table for short).
It follows that each arrow of S appears in some table and constitutes a functional dependency of that table.
In real applications, there is also an inclusion dependency (or referential constraint) from the fact table to each dimension table. This is due to the fact that each D-dimension table usually stores all possible D-values, whereas only a subset of these values is present in the fact table at any given moment (in the example of Figure 2, only "sold" items are present).
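As an illustration only (a sketch with hypothetical table names, attribute names taken from Figure 2, and guessed data types; in practice an attribute such as Date would have to be quoted or renamed, since it clashes with a reserved word), the star schema of Figure 2 could be represented as follows, the foreign keys expressing the inclusion dependencies just mentioned:
Create Table DATE_DIM (Date Date Primary Key, Month Char(7));
Create Table STORE_DIM (Store Varchar(20) Primary Key, City Varchar(30), State Varchar(30), Country Varchar(30));
Create Table ITEM_DIM (Item Varchar(20) Primary Key, Categ Varchar(30), Manuf Varchar(30));
Create Table FACT (Date Date, Store Varchar(20), Item Varchar(20), Sales Decimal(10,2),
                   Primary Key (Date, Store, Item),
                   Foreign Key (Date) References DATE_DIM,
                   Foreign Key (Store) References STORE_DIM,
                   Foreign Key (Item) References ITEM_DIM);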
It is important to note that the dimension tables are not necessarily in normal form of any kind. However, this is not seen as a drawback, since a data warehouse is a read-only database. On the contrary, it is rather seen as an advantage: keeping the dimension tables non-normalized facilitates schema comprehension (and thus query formulation) as well as efficient processing of OLAP queries (fewer joins).
By the way, when the dimension tables are non-normalized, S is called a star-join schema, and when the dimension tables are decomposed in order to obtain some normal form, S is called a snowflake schema.
The implementation of the functional algebra operations and the evaluation of OLAP queries are done as follows:

- projection is implemented by relational projection;
- composition by a join followed by a projection over the source and target attributes of the path expression;
- pairing by join;
- restriction by selection;
- the inverse of a function by the Groupby operation and its recent SQL extensions (Grouping Sets, Rollup, Cube).
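Putting these pieces together, an OLAP query such as "total monthly sales per store" over the hypothetical ROLAP tables sketched earlier (FACT and DATE_DIM) could be written as follows: the join implements the composition of R → Date with Date → Month, and the Group By implements the inverse function that forms the blocks over which Sum is applied:
Select D.Month, F.Store, Sum(F.Sales) As TOTSALES
From FACT F, DATE_DIM D
Where F.Date = D.Date
Group By D.Month, F.Store;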
Multidimensional OLAP (MOLAP)
Another way of implementing star schemas is by using multidimensional arrays, as follows:
- if D_{1}, …, D_{n} are the attributes of R and M is the set of measure levels, then an array [D_{1}, …, D_{n}] of M is defined, called the data cube;
- a number of procedures are made available to implement the various operations needed for the evaluation of OLAP queries over the data cube or along the various dimensions.
The data cube is actually the fact table of ROLAP viewed in an n-dimensional space, in which each point (also called a data cell) is associated with (at most) one M-value.
The main advantage of this implementation (which is currently very popular) is the easy-to-grasp image offered by the data cube. There are, however, two major problems with this implementation:
(a) The data cube is usually very sparse. Indeed, the definition of the multidimensional array requires |D_{1}| × |D_{2}| × … × |D_{n}| storage cells, while only a very small subset of them is associated with an M-value at any given moment in time (this is not the case with ROLAP).
(b) There is no “multidimensional algebra” (at least not yet) to support the functional algebra operations and the evaluation of OLAP queries (i.e., the analogue of the relational algebra and SQL). Instead, there is a great variety of ad hoc operations whose design is driven mainly by (physical) performance considerations.
In fact, in order to remedy these drawbacks, various implementations have appeared that keep the concept of data cube as a user interface while implementing most of the operations using a relational approach. These intermediate implementations are known, collectively, under the name HOLAP (Hybrid OLAP).
Knowledge Discovery in Databases (KDD)
Query languages help the user extract, or “discover” information that is not explicitly stored in a database. For example, in a transactional database containing the tables E(Emp, Dep) and M(Dep, Mgr), one can join the two tables and then project the result over Emp and Mgr to extract a new table: (Emp, Mgr), not explicitly stored in the database.
In a data warehouse, as we have seen, one needs to extract more sophisticated information for decision support purposes. For example, one can ask for the average monthly sales of products, per region and product category. The processing of such a query involves groupings and aggregate operations over large volumes of data, in order to provide the results necessary for report generation.
In recent years, decision support has come to require even more sophisticated data processing, whose results come in various forms:
- classification of data items into predefined classes;
- clustering of data items, according to some similarity measure;
- rules describing properties of the data, such as co-occurrences of data values; and so on.
We also note that information extraction in the form of classification, clustering or rules has been studied for many years in statistics (exploratory analysis) and in artificial intelligence (knowledge discovery and machine learning). However, the data volumes of interest in those areas are relatively small, and the algorithms used do not scale up easily to satisfy the needs of decision support systems.
In what follows, we shall explain briefly how classification and clustering work, and then we shall see (in some detail) only one kind of rules, the so-called "association rules".
Classification
Classification is the process of partitioning a set of data items into a number of disjoint subsets. These subsets are defined intensionally, in advance, usually by specifying the properties that each subset should satisfy. For example, consider the table T(ClId, Age, Sal) contained in the database of a bank. One can partition the set of tuples contained in this table into four disjoint subsets, defined by the following properties (or conditions):
(Age ≤ 25) and (Sal ≤ 2000)
(Age ≤ 25) and (Sal > 2000)
(Age > 25) and (Sal ≤ 2000)
(Age > 25) and (Sal > 2000)
These four subsets can be represented by a binary tree in which
- the root is labeled by the table T;
- each other node is labeled by a subset of tuples;
- each edge is labeled by the condition defining the subset represented by its end node.
In this representation, each subset is defined by the conjunction of the edge labels in the path leading to the leaf. We note that one or more subsets might be empty, depending on the current state of the table T.
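Over a relational DBMS, such an intensionally defined classification can be computed directly with a query; the following sketch assigns each tuple of T to one of the four subsets (the class labels C1 to C4 are purely illustrative names of ours):
Select ClId, Age, Sal,
       Case When Age <= 25 And Sal <= 2000 Then 'C1'
            When Age <= 25 And Sal >  2000 Then 'C2'
            When Age >  25 And Sal <= 2000 Then 'C3'
            Else 'C4'
       End As Class
From T;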
Clustering
Clustering is the process of partitioning a set of data items into a number of subsets based on a "distance" (i.e., a function that associates a number with any pair of items): items that are "close" to each other are put in the same subset, where closeness of two items is defined by the analyst. Thus, in the table T of our previous example, one might define a distance by:
dist(s, t) = abs(s.Sal - t.Sal), for all tuples s, t in T
Then one could cluster together all tuples s, t such that dist(s, t) ≤ 1000. We note that the number of clusters is not known in advance and that no cluster is empty.
We note that the Groupby instruction used in databases, and its extensions for data warehouses, do perform classification and clustering, although of a limited form (i.e., using a specific set of criteria).
Association rules
Consider a (relational) table R whose schema contains n attributes A1, A2, …, An, such that dom(Ai) = {0, 1} for all i; with a slight abuse of notation, we shall assume R = {A1, A2, …, An}. Consider also an instance of R, i.e., a set r of tuples over R (with tuple identifiers). An example of such a table is shown below, where the attributes are the items sold in a supermarket and each row can be thought of as representing a transaction (a cashier slip): a 1 under item Ai in the row of transaction t means that Ai was bought during t, and a 0 means that it was not.
Tid   A1   A2   …   An
…     …    …    …   …
t      1    0   …    1
…     …    …    …   …
An association rule over R is an expression of the form X ⇒ B, where X ⊆ R and B ∈ (R \ X). The intuition behind this definition is as follows: if the rule holds in r, then the tuples having 1 under X tend to have 1 under B as well. As a consequence, such rules are useful in describing trends, and thus useful in decision making. In this context, it is customary to use the term "item" instead of "attribute", and "itemset" instead of "attribute set". There are several application areas where association rules prove to be useful:
- In supermarket data collected from barcode readers: the columns represent the products for sale, and each row represents the products purchased in one transaction.
- In databases concerning university students: the columns represent the courses, and each row represents the set of courses taken by one student.
- In citation index data: the columns represent the published articles, and each row represents the articles referenced by one article.
- In telephone network data collected during measurements: the columns represent the conditions, and each row represents the conditions that were present during one measurement.
We now introduce some concepts that are related to association rule extraction:

- Given a subset W of R, the frequency or support of W in r, denoted s(W, r), is the fraction of tuples in r having 1 under W, i.e.,
  s(W, r) = |{t ∈ r : t(A) = 1 for all A ∈ W}| / |r|
  We say that W is frequent in r if s(W, r) ≥ α, where α is a user-defined threshold. Clearly, if W ⊆ Z ⊆ R then s(W, r) ≥ s(Z, r).
- Given an association rule X ⇒ B, the frequency or support of X ⇒ B in r, and the confidence of X ⇒ B in r, denoted s(X ⇒ B, r) and c(X ⇒ B, r), respectively, are defined as follows:
  s(X ⇒ B, r) = s(X ∪ {B}, r) and c(X ⇒ B, r) = s(X ∪ {B}, r) / s(X, r)
  It follows that 0 ≤ c(X ⇒ B, r) ≤ 1.
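As a small, purely hypothetical numerical illustration: if r contains 1000 tuples, of which 50 have 1 under both A1 and A2, and 40 have 1 under A1, A2 and A3, then s({A1, A2}, r) = 50/1000 = 0.05, s({A1, A2} ⇒ A3, r) = 40/1000 = 0.04, and c({A1, A2} ⇒ A3, r) = 0.04/0.05 = 0.8.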
The main problem here is to find all rules X ⇒ B (i.e., for all X and B) whose support and confidence exceed given thresholds, minsup and minconf, given by the analyst. Such rules are called interesting rules. We stress the fact that, usually, no constraint is imposed on X or B, thus some of the extracted interesting rules might be unexpected.
Clearly, the search space for interesting rules is exponential in size, hence the need for efficient algorithms. In what follows, we describe such an algorithm and its implementation in SQL. The design of the algorithm proceeds in two steps:
Step 1
find all frequent subsets X of R
Step 2
for each frequent set X
  for each B in X
    check whether (X \ {B}) ⇒ B has sufficiently high confidence
The approach also uses a basic fact which is easy to show:
- every subset of a frequent set is frequent.
An immediate corollary is the following:
- if X has a non-frequent subset, then X is not frequent.
Based on this last result, we can now describe the basic computing steps of an algorithm for finding all frequent sets:
- find all frequent sets of size i (we start with i=1);
- use the frequent sets of size i to form candidates of size i+1 (a candidate of size i+1 is a set of cardinality i+1 all of whose subsets of size i are frequent);
- among all candidates of size i+1, find those that are frequent;
- continue until no more candidates are found.
More formally, the algorithm can be stated as follows:
Algorithm FFS (Find all Frequent Sets)
1/ C:= {{A}: A ∈ R};
2/ F:= ∅;
3/ i:= 1;
4/ while C ≠ ∅ do begin
5/    F’:= {X: X ∈ C and X is frequent};
6/    F:= F ∪ F’;
7/    C:= {Y: |Y| = i+1 and for all W ⊆ Y with |W| = i, W is frequent};
8/    i:= i+1; F’:= ∅
9/ end;
Clearly, the algorithm requires only one pass over r for step 5 (checking which candidate sets are frequent), thus the algorithm reads the relation r at most k+1 times, where k is the size of the largest frequent set. It follows that the worst-case complexity of the algorithm is k+1 (in terms of passes over the data).
As for the implementation of the algorithm over a relational DBMS, we note that the algorithm needs to compute supports of the form s({A1, A2, …, Am}). Each such support can be computed by a simple SQL query of the following form:
select count(*) from R where A1= 1 and A2= 1 and … and Am=1
Moreover, all these queries have the same form, thus there is no need to compile each one of them individually.
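Confidence can be computed in the same style, as the ratio of two such counts obtained in a single pass; the following is a sketch for the hypothetical rule {A1, A2} ⇒ A3 (assuming at least one tuple has 1 under both A1 and A2, otherwise the denominator is zero):
Select Sum(Case When A1=1 And A2=1 And A3=1 Then 1 Else 0 End) * 1.0
     / Sum(Case When A1=1 And A2=1 Then 1 Else 0 End) As CONF
From R;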
The above algorithm is a slightly modified version of the well-known Apriori algorithm, whose principles are found in one form or another in most software tools for the extraction of association rules from large tables.
Several remarks are in order here concerning association rules and their extraction. First, the table used for the extraction is usually not a relational table, but it can be derived from one. Moreover, the relational table itself can be either a database table or the result of a query to the database (in either case, it can be converted into a table over which the extraction will take place). In this respect, we note that the process of information extraction from databases, better known today as Knowledge Discovery in Databases (KDD), is inherently interactive and iterative, containing several steps:
- understanding the domain;
- preparing the data set;
- discovering patterns (Data Mining);
- post-processing of discovered patterns;
- putting the results into use.
As for its application areas, they include the following:
- health care data (diagnostics);
- financial applications (trends);
- scientific data (data patterns);
- industrial applications (decision support).
Second, in typical applications of association rule extraction, the number of extracted rules might be quite large, from thousands to tens of thousands. In such cases the extracted association rules might be impossible to exploit. Several approaches have been proposed recently with the objective of lowering the number of extracted rules:
- partial specification of X and/or B (i.e., constraints on X and/or B);
- reduced sets of rules (similar to reduced sets of dependencies);
- rule generalization (children clothing ⇒ dairy product, instead of children pants ⇒ milk).
These approaches, together with convenient and user-friendly visualizations of rules, tend to make their use easier.
As a final remark, we would like to comment on the way association rules are likely to be used in practice. To this end, let us consider an example. Suppose that the association rule Pen ⇒ Ink has been extracted as an interesting rule (i.e., one whose support and confidence exceed the predefined thresholds). Such a rule implies that customers buying pens tend to also buy ink. One scenario in which this rule might be used is the following: assuming a large stock of unsold ink, the shop manager might decide to reduce the price of pens, in the hope that more customers would then buy pens (and therefore ink, following the tendency revealed by the extracted rule). However, such a use of the rule may lead to poor results, unless there is a causal link between pen purchases and ink purchases that somehow guarantees that the tendency will continue in the future as well. Indeed, association rules describe what the tendency was in the past, not what it will be in the future: they are descriptive rules, not predictive ones. In other words, it may happen that, in the past, all customers bought ink not because they bought pens but simply because they usually buy all stationery together! If such is the case, then the extracted rule Pen ⇒ Ink has been found to be interesting because of the simple co-occurrence of pen purchases and ink purchases, and not because of the existence of a causal link!
In conclusion, although association rules do not necessarily identify causal links, they nevertheless provide useful starting points for identifying such links (based on further analysis and with the assistance of domain experts).
Software Application
We use the Mondrian platform on: http://dup2.dyndns.org:8080/mondrian/testpage.jsp
[Figure 2: A Dimensional Schema. A star schema with centre R = {Date, Store, Item}, measure Sales, and dimension levels Month (for Date); City, State and Country (for Store); and Categ. and Manuf. (for Item).]
