Altair® Panopticon


[2] Using Altair Panopticon Visualization Server with a Designer Role

Introduction

Visual Data Discovery is performed through workbooks. A workbook is a collection of:

•  Dashboards (Visual Layouts)

•  Data Tables (Data Query and Schema Definitions)

•  Actions (Contextual Interaction Definitions)

•  Overall styling

Dashboards may consist of several parts, including visualizations, legends, filters, action controls, labels, and images.

Data tables define the queries and source data repository definitions used to retrieve data, and output both data schemas and data conduits. They do not store data; they are simply the conduit through which data flows.

The core of the product is the processing of data, which can range from real-time streaming datasets, retrieved asynchronously, to static and historical datasets, retrieved synchronously on a defined periodic basis. It is assumed that data is never at rest; consequently, data refresh is an automatic operation across all datasets.

Data sources can be connected to directly, with data retrieved on the fly as required.

Alternatively, for slower underlying data sources, the data can be extracted locally on a scheduled or ad hoc basis. This locally extracted data can then be queried, minimizing query latency but increasing the risk of stale data.

Data can be accessed in a number of ways, depending on the need and on the capabilities of the source repository:

•  Retrieve all data into memory

For example, retrieving an MS Excel spreadsheet.

•  Retrieve subsets into memory, which may be summarized or parameterized

For example, retrieving a summary view, and then retrieving a detailed dataset, based on the selection in the summary view. This method provides very tightly controlled data retrieval times but requires the paths through data to be pre-specified, with pre-defined data queries (including stored procedures).

•  Retrieve only required results into memory by querying on demand, pushing aggregation and filtering tasks to underlying big data repositories or queryable data extracts.

This is commonly known as a ROLAP implementation, where the product dynamically writes data queries to the underlying data repository and retrieves aggregated and filtered datasets. Given the on-demand nature of this method, it is better suited to exploratory data analysis but requires dynamic query generation, as sketched below.
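To make the distinction concrete, the following minimal sketch (in Python, using the standard sqlite3 module as a stand-in for an underlying repository) contrasts the first and third methods: pulling every row into memory versus generating an on-demand, ROLAP-style aggregation query from the breakdown and filter selected in a dashboard. The trades table, its columns, and the build_aggregate_query helper are hypothetical illustrations and are not part of the Panopticon API.

import sqlite3

# A small stand-in repository; in practice this would be a big data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (region TEXT, symbol TEXT, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?, ?)",
    [("EU", "ABC", 100, 10.5), ("EU", "XYZ", 50, 20.0),
     ("US", "ABC", 200, 10.7), ("US", "XYZ", 75, 19.8)],
)

# Method 1: retrieve all data into memory; every row crosses the wire and
# any aggregation or filtering then happens client-side.
all_rows = conn.execute("SELECT region, symbol, qty, price FROM trades").fetchall()

# Method 3 (ROLAP style): translate the breakdown and filter chosen in the
# dashboard into a query, so only the aggregated, filtered result is returned.
def build_aggregate_query(dimension, measure):
    # Hypothetical helper that generates the pushed-down aggregation query.
    return (
        "SELECT {d}, SUM({m}) AS total FROM trades "
        "WHERE region = ? GROUP BY {d}".format(d=dimension, m=measure)
    )

summary = conn.execute(build_aggregate_query("symbol", "qty"), ("EU",)).fetchall()

print(len(all_rows))  # 4 rows held in memory
print(summary)        # [('ABC', 100), ('XYZ', 50)] -- only the current view's result

The parameterized pattern of the second method works in a similar way, except that the detail query is pre-defined (or a stored procedure) and the selection made in the summary view is bound into it as a parameter, rather than the query being generated dynamically.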

In cases where there is too much data to retrieve into memory, data can be accessed directly from the underlying source, or through data extracts created in the Panopticon Visualization Server. Because a data extract supports on-demand queries, summarization, and parameterization, it can be a more capable option than many underlying data sources.

Data extracts are available for non-streaming data sources and can be used globally across all workbooks.

In the following sections, the product is demonstrated, starting with the various layouts, followed by the definition of data retrieval, and then the building of dashboards.