Thursday, March 15, 2018

Tableau and MongoDB Analytics: Why It's a Bad Marriage

The Challenges of MongoDB Analytics with Tableau

First off, we are not here to bash Tableau. Tableau is an excellent analytics tool for structured relational data. Our point is that data has moved beyond well-understood structured data and now includes semi-structured and unstructured data stored in newer NoSQL databases, like MongoDB. Trying to use analytics tools that are architecturally committed to relational structures for analytics on MongoDB (NoSQL) is the definition of forcing a square peg into a round hole.

To be fair, Tableau was developed before NoSQL and before Big Data. It's designed to understand SQL and nothing else, making analytics on newer data sources a significant challenge. To work around this, customers typically perform data discovery somewhere else to model the data so they can build transformations and mappings to load it into a relational structure like a MySQL table. This is accomplished either through ETL or through ODBC drivers that "map" unstructured data into a table-like structure, which is exactly what the MongoDB BI Connector does.
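To make that "mapping" concrete, here is a minimal, hypothetical Python sketch of what flattening a nested MongoDB document into table-like rows involves. The document and field names are invented, and this is an illustration of the general idea, not the BI Connector's actual logic:

```python
# Illustrative only: roughly what "mapping" a nested document into a
# table-like structure does. Document and field names are hypothetical.
order = {
    "_id": "o1",
    "customer": {"name": "Acme", "tier": "gold"},
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

# One nested document becomes several flat rows: parent fields are
# duplicated on every row, and the document's hierarchy is lost.
rows = [
    {
        "order_id": order["_id"],
        "customer_name": order["customer"]["name"],
        "customer_tier": order["customer"]["tier"],
        "item_sku": item["sku"],
        "item_qty": item["qty"],
    }
    for item in order["items"]
]
print(rows)
```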

MongoDB BI Connector Description

In case you missed it, the BI Connector moves data out of MongoDB into MySQL tables so that Tableau will work. MongoDB is a powerful database solution for modern data, so moving data out of it into MySQL for analytics somewhat defeats the purpose of the investment. At some point, you have to ask how much overhead and artificial data limitation you are willing to accept to preserve your investment in these legacy BI tools.

This question should be top of mind as Big Data becomes deeply integrated into your operational analytics, and you begin to explore how to leverage advanced analytics and machine learning to develop new products and services. These systems are already complicated, so it's hard to imagine how these legacy BI tools stay relevant when they add so much unnecessary complexity, overhead, and cost while limiting data availability and potentially impacting data fidelity.

Data engineers and business teams want the same instant data discovery and self-service analytics with MongoDB data. They don't understand the complexities behind why Tableau and other SQL-based analytics tools struggle to work with MongoDB the way they work with a MySQL database, and because they don't, they have little patience for waiting weeks or months for MongoDB data to be made available for analytics.

This impatience, combined with business leaders' expectation that analytics will drive decision-making at all levels of their organizations, means a fundamental shift is underway in how data and analytics teams integrate modern unstructured and semi-structured data into their analytics architectures. There is growing intolerance for building heavy ETL processes to move, transform, prep, and load data into a staging area. In addition to slowing projects down, the high cost of changes makes experimentation less likely to happen. The trend is toward simplifying data architectures with native integration to these modern data stores, like MongoDB, Cassandra, Couchbase, etc. Today, in many cases, going native means building custom code and processes, which limits the number of teams that can access the data. Again, this is pushing analytics tools to step up and manage data from new data sources differently, without requiring it to be moved and transformed back into relational structures.

We are at the early stages of the next wave of innovation in analytics, where you will see changes in how analytics platforms interact with newer data sources and learn to handle structured, semi-structured, and unstructured data in the same way. Only then will business teams be able to fully leverage their data, experiment with new insights and machine learning, and create actionable, data-driven intelligence.


Our mission at Knowi is to simplify and shorten the distance between data and insights for all data: unstructured, structured, and multi-structured. To accomplish this, we believe you need to: a) leave data where it is, and b) enable data engineers to explore all data without the restrictions that result from mapping it to a relational structure.
We are a certified MongoDB partner and the only analytics partner to natively integrate. No ETL. No ODBC drivers. No proprietary query language.

You can play around with a NYC Restaurant dataset in our MongoDB sandbox to see for yourself how nice it is not to have to move your data out of MongoDB to analyze it.

We also natively integrate with most other leading NoSQL, SQL, and RDBMS data sources, as well as REST APIs, enabling data engineers to create blended datasets and visualizations in minutes.

Sign up for free at

Friday, February 23, 2018

Will Cockroaches and Data Silos Be the Only Things Left? Part II

Data Services is the Answer!  Yes, but...

In our last post, we talked about using a data warehouse strategy as one of the ways to break down data silos across multiple departments and systems. Building a data warehouse is a traditional way to tackle the data silo problem. However, successful projects take serious organizational commitment and months of development time. With data moving faster than ever and business teams increasingly looking to rapidly experiment with analytics, the wait time often leads business teams to move on before the warehouse can even be deployed. As a result, business and IT teams are looking for other ways to unify data across disparate data sources.

The second option is to build a data services layer where data engineers can query disparate repositories, including unstructured and structured data, to build blended data sets for business teams. There are many advantages to this approach over building a data warehouse.

Benefits of Data Services vs. Data Warehouse Approach

No moving data

Arguably the most significant benefit of a data services strategy over a data warehouse strategy is eliminating the ETL processes and custom extract-and-load jobs needed to wrangle data from your various sources, transform it into a relational structure, and load it into the warehouse. Data services, by their nature, do not move data from the source systems and allow you to blend data to create virtual datasets. For example, you can pull data from your MySQL database, blend it with data from your Cassandra data store, and create a new dataset for use in analytics. However, there is another factor to consider: if you're looking at data virtualization solutions, many still require data to be transformed into a common relational structure before it can be used. Data has evolved beyond well-understood relational models; forcing it to conform adds cost and complexity, so choose your solution wisely.
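As a sketch of the blending idea, here is a minimal Python example that queries each source in place and joins the results in a service layer. The hosts, credentials, and field names are hypothetical, and this illustrates the pattern rather than any particular product's implementation:

```python
# A minimal sketch of a blended "virtual dataset": query each source in
# place and join the results in the service layer. Hosts, credentials,
# and field names are hypothetical.
import pymysql
from cassandra.cluster import Cluster

# Customer master data stays in MySQL...
conn = pymysql.connect(host="mysql-host", user="u", password="p", db="crm")
with conn.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT customer_id, name FROM customers")
    customers = {row["customer_id"]: row for row in cur.fetchall()}

# ...while interaction data stays in Cassandra.
session = Cluster(["cassandra-host"]).connect("events")
activity = session.execute("SELECT customer_id, page_views FROM activity")

# Blend the two into a new dataset without loading either into a warehouse.
blended = [
    {**customers[row.customer_id], "page_views": row.page_views}
    for row in activity
    if row.customer_id in customers
]
```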

Free advice: Make sure the data services layer in your solution natively supports (no drivers to install) unstructured and semi-structured formats from NoSQL and REST API sources so you can avoid the need to transform your data and shoehorn it back into a relational structure. Even if you are only using structured data today, that may not be the case tomorrow, as most new data that is interesting to explore is semi-structured or unstructured.

Experimentation and agility

Most business teams and leaders understand that analytics can make the difference between profit and loss, or between beating the competition and taking a beating. As analytics becomes critical to a company's ability to compete, agility in building new data pipelines becomes critical too. With a data services layer that natively integrates with unstructured and structured data sources, you give your data teams the ability to rapidly discover and experiment without the overhead of updating schemas and ETL processes. By unshackling them from a pre-defined schema, they can transition to an iterative, agile development model for building data analytics products and work closely with the business to rapidly experiment and refine. Analytics products built in this manner are much more effective at moving business teams toward data-driven decision-making because they deliver exactly what teams need, much faster. If business teams have to wait weeks or months for their change requests to be acted upon, they will have moved on.

Free advice: Ask yourself how difficult it is to add a field, a table, or a new data source to your existing analytics architecture. If the answer is "I'd rather have a root canal," then you might have a problem. Your data services layer should resolve this problem, not contribute to it. Be sure you're not adding barriers to experimentation by forcing conformance to a relational structure when you have semi-structured or unstructured data sources in your stack.

Reduced cost of ownership

All things considered, a data architecture built on a data services layer should be less complex than one built on a data warehouse, simply because you are not moving data and no pre-defined schemas are used. With reduced complexity comes a reduction in the cost to build and maintain the architecture: you need fewer resources to develop the data pipeline, and the cost of changes is relatively low.

Free advice: I can't emphasize enough that simplification goes out the window as soon as you start transforming unstructured data back into a relational structure, so this benefit assumes native integration with no use of drivers, etc. Building the integration may sound difficult, but there are tools out there that have already solved the native integration problem. We are one of them, but there are others.

In our humble opinion, the need to move, flatten, transform, and apply structure to unstructured data should be a thing of the past. We are evidence of an emerging wave of new analytics tools leading the way to a future of data analytics where business self-service, experimentation, and data agility thrive. Come catch the wave with us! Sign up for a free trial here 

Thursday, January 18, 2018

Knowi Product Update Q4 2017

To see these exciting new capabilities in action, please join Lorraine Williams, Head of Success at Knowi, for a web demo on Wednesday, January 31st at 11:00 AM PT.

Lovin' Query Management

Our query capabilities are at the heart of what makes Knowi different. We constantly add capabilities, but in the past few weeks we've focused our efforts and added a number of enhancements:

Join Builder
In addition to performance improvements to the Join functionality, we now provide join assistance from within the Query page. The supported join types are listed along with auto-detected key field candidates for each side of the join. You can, of course, still enter the join criteria manually.

Save Query as Draft
You now have the ability to save a query in progress without creating an associated dataset and widget.  Go for that coffee break!

View Query Change History
Wondering who messed up your query? Wonder no more: you now have the ability to view an audit history of query changes (applicable only if you have edit permissions for the query in question). From the Query Listing page, a history icon is now available. When clicked, it shows the username and timestamp of each change.

Query Filter Suggestions
Filter auto-suggestions and hit-list filter capabilities can now be seen from within the Query Builder itself.

Join Post-Processing
You can now apply Cloud9QL functions to a dataset post-join.

Preview Data at Each Join Step
You can now preview dataset results at each join step.  This can be especially useful when you have multiple join steps.

We're Getting Slacky

Slack integration allows you to trigger actions in your Slack channel(s) when an alert condition is triggered. When the condition fires, we'll send a message to the predefined channel(s), including an attachment of full or conditional data, depending on the options selected.
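For readers curious about the mechanics, alert-to-Slack pushes generally ride on Slack's incoming webhooks. Here is a hedged sketch of that general pattern in Python; the webhook URL, condition, and message format are placeholders, not Knowi's implementation:

```python
# Sketch of the generic alert-to-Slack pattern via an incoming webhook.
# The URL, condition, and message format are placeholders.
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(metric, value, threshold):
    if value > threshold:  # the alert condition
        requests.post(WEBHOOK_URL, json={
            "text": f"Alert: {metric} is {value}, above threshold {threshold}."
        })

notify_slack("error_rate", 0.07, 0.05)
```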

Stranger Danger

Enterprise data security is top of mind for everyone, so we leverage new security capabilities from our database partners as quickly as possible.

SSL Support
We now support SSL-enabled MarkLogic and DataStax/Cassandra.

Role-based access control (RBAC) Support
We now support RBAC in Couchbase 5.0.

Access Control List
The system now supports creating whitelists and blacklists of datasource assets (tables/collections/indexes), allowing the datasource creator to specify which assets are available to subsequent queries. The datasources that currently support ACL functionality are Elasticsearch, Oracle, and Knowi Elasticstore.

Other Cool Stuff

Email Reporting Improvements
Parametrized Report Templates
The Email Report function has been enhanced to pass in user-level query filters, ensuring the report contains only the data the recipient is allowed to see. Any dataset attachments also adhere to the passed-in parameters.

Analyze Grid Formatting
A number of usability enhancements were made, including:
  • Ability to view statistical data for numerical columns
  • Added formatting options for numeric and date columns: currency, date, percent, and decimal places
  • Ability to resize columns
  • Added Count option for column aggregation
  • Added 'does not equal' as an operator in the conditional grid formatting options
Embed API Formatting
An option has been added to the JS Embed API that allows for auto-sizing of content based upon the full height of the dashboard.

New Datasources
Added support for:
  • Apache Hive
  • Couchbase 5.0
Cloud9QL Enhancements
Cloud9QL Function AutoComplete
When adding a function in Analyze or preview modes, the system now shows a dropdown list of available C9QL functions, along with autocomplete.

A new Cloud9QL function has been added that allows you to control the display of numerical values. The format is NUMBER_FORMAT(<number>, <format>). For example: select number_format(clicks, ##,###.00) as Number of clicks

If your data is a JSON string, the PARSE function can be used to convert it into an object, which can then be further manipulated and processed.

You can now provide an alternate value to be used when a specified field doesn't exist or its value is NULL.
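For intuition, here are rough Python analogues of the three behaviors just described. This is illustrative only: it is not Cloud9QL syntax, and the field names are invented.

```python
# Python analogues of the Cloud9QL behaviors above; field names hypothetical.
import json

record = {"payload": '{"clicks": 12345, "region": null}'}

# PARSE-like: turn a JSON string into an object for further processing
payload = json.loads(record["payload"])

# NUMBER_FORMAT-like: control the display of a numeric value
formatted = f"{payload['clicks']:,.2f}"   # -> '12,345.00'

# Alternate-value-like: fall back when a field is missing or NULL
region = payload.get("region") or "UNKNOWN"

print(formatted, region)  # 12,345.00 UNKNOWN
```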

Wednesday, January 10, 2018

5 Most Influential Data Analytics Trends for 2018

Happy New Year! As we enter the new year, the question always comes up: "What are the data analytics trends we should pay attention to in 2018?" The technology trends we see emerging support a couple of large-scale business analytics trends.

First, there is a slow but steady movement away from process-driven and toward data-driven decision-making. Forward-thinking organizations understand that 1+1 does not always equal 2 once other factors are considered, so following a linear decision-making process may not always produce the desired outcome. Data provides a clear picture of the internal and external influences that may impact outcomes, positively or negatively. Getting up-to-date information into the hands of the right people up and down the organization, in a context they can use, is becoming critical to future competitiveness.

Second, there is widespread realization that data is an asset and a valuable one at that.  While analysts have been talking about data monetization for a while, most organizations are still trying to get their data to work for them internally.  In 2018 many will continue to focus their analytics thinking inwards, but those who have figured it out are starting to look outward.  They are changing how they think about analytics to include how their customers and partners might also benefit. It may be a while before we see a proliferation of data marketplaces, but organizations are starting to think about how they can use their data to create new data-driven products and services and open up new revenue streams.

With these high-level business drivers in mind, here is our take on the 5 Most Influential Data Analytics Trends for 2018. 

An analytics architecture rethink is underway.

For most companies, single-stack data architectures just don’t fit anymore. Modern data stacks optimize for different types of data and use cases, so most enterprises have a mix of RDBMS, SQL, NoSQL, and cloud APIs. Trying to funnel all the right data into a data lake, data warehouse, or other reporting data store is increasingly difficult and time-consuming as business and data move faster than ever before. To keep up with business demands for agility and rapid experimentation, data analytics architectures will need to adapt and modernize to cope with an evolving future.

In 2018, this means analytics platforms that venture into this modern data environment will stop being simply data visualization tools.  The next wave of BI solutions will take on more data management (data plumbing) capabilities to eliminate the need for ETL and data prep tools and simplify the data analytics stack.  They will natively interact with structured, unstructured and semi-structured data sources, be highly extensible, embeddable, and include published APIs to insert into operational workflows.  These capabilities will allow for the merging of analytics and applications into data-driven applications and transform BI tools from monolithic single destination applications to analytics distribution frameworks that fuel business transformation.

Data science, data discovery and data engineering converge to support rapid experimentation.

Is 2018 the year Big Data just becomes data? I don’t know, but it seems that as NoSQL technologies become mainstream (note: MongoDB IPO), it makes less and less sense to distinguish Big Data from any other kind of enterprise data. Business teams want to explore large data sets, add context from different sources, and build advanced analytics. For the most part, Big Data and AI are segmented off with their own set of tools and resources, which limits an enterprise's ability to do interesting things beyond a few use-case-level experiments.

2018 will be the year that CTOs and CDOs realize that to transform into a data-driven enterprise they need a unified view of all their data, as well as a data architecture and culture that encourages experimentation. New technologies like AI need to be integrated so the work done by data scientists and data engineers can be shared with a broader audience and leveraged enterprise-wide. This is more than just a shift away from old data architectures; it requires rethinking how business teams, data engineers, and data scientists work together in an iterative, agile development process better suited to rapid experimentation.

Data-driven applications and services create new revenue streams.

Forward-thinking companies understand that analytics can change experiences for the better. By providing analytics as part of a product or service offering, customers stay longer and do more, making these new data-driven applications more valuable to the consumer. This value converts to a willingness to pay more, higher retention rates, and long-term strategic relationships, which are the stuff of every product manager's dreams.

In 2018, product managers will ride the wave of experimentation-focused analytics architectures to drive top-line growth. Product managers will need the ability to experiment with all available data, easily and securely embed analytics into applications, and iterate quickly in a self-service environment. As an example, one of our customers, Boku, is monetizing their analytics by providing financial reporting to their merchant and carrier partners. You can read more here.

Public cloud, private cloud, on-premise deployment?  Yes, please.

Cloud-first strategies continue to be the favorite option for analytics, including Big Data analytics, because of the reduced onboarding friction and greater flexibility. However, enterprise data stacks are not necessarily on the same path. With regulatory, security, variable cost, and performance concerns, many enterprises have opted out of the cloud for some applications. Additionally, NoSQL technologies are optimized to store certain types of data and serve specific use cases. For example, enterprises use MySQL to store customer information, MongoDB to store customer interactions from their website, and Elasticsearch to let customers search large data sets very quickly. Reporting platforms need to pull data from potentially all of these sources to add context and answer even basic questions about how engaged a customer is with a product or service. There is little indication that enterprises will standardize on a single stack or environment; in fact, the opposite is true.

In 2018, there will be a push to modernize analytics architectures to handle increasingly fragmented data and application architectures without requiring data to be moved and transformed, as this limits the ability to be agile and experiment. Analytics platforms that play nicely with cloud APIs and can navigate any variety of hybrid environments will have the advantage.

Reporting moves beyond single destination dashboards.

Achieving mass adoption of analytics is often the biggest barrier to transitioning into a data-driven enterprise. The reasons are two-fold. First, analytics platforms typically don’t serve different users with varying skill sets well. Second, getting up-to-the-minute information into the hands of the right people is pretty hard. These gaps leave some people reliant on other team members for reporting, while others just give up.

In 2018, the push toward data-driven enterprises continues, and analytics platforms will start to move the boundaries of where reporting happens, challenging long-standing barriers to enterprise-wide adoption. Logging into an analytics platform and looking at a set of dashboards isn’t going away anytime soon. However, what you will see in 2018 is broader use of highly contextualized analytics pushed to where the user lives. This includes targeted analytics built into data applications and early use of AI to make analytics more interactive. While talking to your analytics platform, or typing a question into Slack and having a dashboard appear, sounds a bit gimmicky today, did you think five years ago it would be normal to ask a device sitting in your kitchen to turn on the lights? We are still in the early stages of this transition away from desktop dashboards to analytics everywhere, but 2018 will see a significant step forward on the journey.

Friday, December 22, 2017

Why the World Needs Another Business Analytics Tool


Time to Rethink Business Analytics Architectures 

I said to a friend not too long ago that I was going to do a new Business Intelligence startup. His response: "Just what the world needs, another BI solution." After telling him to stop being a nitwit, I realized I would have to answer the question: why does the world need another business analytics tool? Well, it doesn't. Not in the sense you are thinking, anyway. Let me explain.

When someone says BI or data analytics tool, I would hazard to guess you think about data visualizations and dashboards. However, to get to the point where you can do visualizations and create awesome-looking business dashboards, your data has already been moved, transformed, aggregated, moved, joined, and moved again until it finally lands in a prepped relational form in a SQL-friendly database. Traditionally, BI tools have left much of the heavy lifting for data analytics to middleware and data integration tools, like Talend, which extract-transform-load (ETL) data from various sources to a staging/reporting area.

That worked great ten years ago when data was relational, structured, and prepped, but the data stack has completely changed in the last seven years. Now you’ve got SQL databases co-existing with workload-optimized NoSQL databases. You've got Elasticsearch for searches on large sets of data and MongoDB for storing general-purpose semi-structured data, along with REST APIs. At the same time, with over 40 years of history, relational databases aren’t going away. They are going to remain in the enterprise for the foreseeable future.

So while data itself has massively evolved in the past decade, business analytics tools, even newer Cloud BI solutions, have not. They are still architected for smaller, structured, prepped relational datasets. The result is the fragmentation of enterprise data architectures to include various analytics, data integration, and data prep solutions that bridge the gap between what traditional BI tools can handle and the reality of modern data stacks that include structured, semi-structured, and unstructured data.


Now, look at what you're trying to accomplish with your data analytics in the next few years. Most enterprises understand their data is becoming a valuable asset. How well you leverage it will positively or negatively impact your future competitiveness. Competitive advantage will come from transitioning to a data-driven enterprise, creating new data products and services, and driving actions with real-time analytics. But to get there, your BI tools have to work with data that is essential to you, no matter its source, size, or speed.

I know almost all the existing BI tools claim to support modern data with their "native" connectors, drivers, etc. The reality is that these drivers use ODBC frameworks built 20 years ago for relational data. The whole point of their existence is to provide a translation layer BI tools can understand, and by “understand” I mean a column-and-row model. But that model no longer applies, because data is no longer structured this way. Trying to use these drivers for unstructured or semi-structured data is like putting a square peg in a round hole.
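To see why, consider a query over nested arrays that is natural in MongoDB but has no single flat table to translate against. A hedged pymongo sketch, with a hypothetical collection and fields:

```python
# Illustrative pymongo aggregation over nested arrays; the collection and
# field names are hypothetical. Each document carries its own nested
# structure, so there is no clean column-and-row translation.
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["shop"]

pipeline = [
    {"$unwind": "$items"},                              # explode the nested array
    {"$group": {"_id": "$items.sku",
                "total_qty": {"$sum": "$items.qty"}}},  # aggregate per SKU
]
for doc in db.orders.aggregate(pipeline):
    print(doc["_id"], doc["total_qty"])
```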

Enterprises with a multitude of data sources are still ironing out how to get a unified view of all their data to support data agility and, more importantly, experimentation. I would argue that to achieve the level of data agility required for digital transformation, you have to significantly reduce, if not eliminate, the ETL processes and tools involved. Once you can provide an enterprise-wide unified view of data, that becomes the fundamental building block for predictive analytics and machine learning, natural language queries, prescriptive actions, and more. The difference from what we see today is that business analytics innovations get applied enterprise-wide, not just in one or two departments for specific use cases.

In short, the world doesn't need another BI tool; it needs an analytics platform that completely rethinks data and analytics architectures for modern data. Where ETL is minimized, if not eliminated. Where any kind of data can be analyzed and insights are visualized instantly, anywhere. Where business users interact with business analytics naturally and where data drives actions at all levels of your organization. Where companies can embed analytics easily to drive new monetization opportunities using their data. Where historical data can be seamlessly combined with machine learning to drive insights and actions.

Business leaders understand that analytics can transform their business. Now it's time for analytics vendors to build the platform to get them there. Our vision for Knowi is to lead the next wave of data analytics solutions that completely change how enterprises build, interact with, predict from, and monetize their data.

Thursday, November 2, 2017

Real World Healthcare Analytics Dashboard Examples

It can be hard to find real-world examples of how organizations use analytics and dashboards to manage specific aspects of their business. Recently, our partner Sagence walked through the dashboards they built for Shirley Ryan AbilityLab for denials management and data quality monitoring.

Denials Management

Using visualizations, claims managers at Shirley Ryan AbilityLab can point to patterns of denials and work with payers to uncover the root cause. They can also use these identified patterns to predict higher-risk claims and start working with payers early in the process to avoid final review denials.

Denials Management Dashboard
Note: Not Reflective of Actual Numbers
When claims managers have all the information about denials at their fingertips, it shifts the conversation with payers from anecdotal to fact-based. In this 10-minute video, you can see the different dashboards and how claims managers use each one to reduce denials.

Data Quality Monitoring

The challenge of monitoring data quality across multiple departments and different systems is significant but critical for analytics.  Shirley Ryan AbilityLab wanted to take an innovative approach to data quality monitoring by building a single dashboard for analysts to monitor data quality across the network.  

Data Quality Monitoring Dashboard
Note: Not Reflective of Actual Numbers
With data quality threshold alerts and drill-downs, an analyst can identify where data quality issues are increasing and work with departments to adjust processes or conduct additional training. In this 10-minute video, see the full dashboard and hear how each visualization helps improve data quality.

You can download the Sagence and Shirley Ryan AbilityLab customer story here. It details the full solution architecture and additional use cases.

Tuesday, October 17, 2017

Knowi Product Update Q3 2017


You can see the exciting new capabilities described below in action. Lorraine Williams, Head of Success at Knowi, demonstrated them recently and we recorded it. To watch the replay, click the button below.

Register for product update webinar

Expanded Machine Learning Capabilities 

For supervised learning, you can now select algorithms for either classification or regression models, meaning you can predict continuous values (e.g., housing prices in Boston) or predict categories or classes (e.g., the likelihood that a person will default on a credit card payment).
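If the regression-versus-classification distinction is new, this minimal scikit-learn sketch illustrates it with toy data. It is purely illustrative and says nothing about Knowi's internals or supported algorithm list:

```python
# Toy illustration of regression vs. classification; not Knowi's internals.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])  # a single feature

# Regression: predict a continuous value (e.g., a price)
y_price = np.array([100.0, 150.0, 210.0, 260.0])
print(LinearRegression().fit(X, y_price).predict([[5.0]]))

# Classification: predict a class (e.g., default vs. no default)
y_default = np.array([0, 0, 1, 1])
print(LogisticRegression().fit(X, y_default).predict([[5.0]]))
```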

Stats, Stats Everywhere
The system now allows you to view statistical metadata about your datasets, such as the total number of rows and columns, max and min values, mean, and standard deviation. The dataset overview can be viewed by selecting the bar chart icon on the Analyze Grid. For more detailed analysis, pairwise scatterplots of each variable's interaction with its peers are also available from the overview.

Filter Like You Mean It

Generating filter values based upon a separate query 
The system now supports filtering based on the results of another query. This dynamic filtering capability is achieved by first creating a query that returns the possible filter values and then selecting the database icon next to the add/remove filter buttons. Clicking this option sets the auto-suggestions based upon the secondary query's results.


Setting the Filter Audience
The system now offers options when setting filters at both the Dashboard and Widget levels. You can set a personal filter that is only seen by you. Admins and Dashboard owners can set a global filter, which acts as a default filter for all users, and admins can reset filters, which returns any personal filters to the global default set by the Admin or Owner.

Multi-value user filter support
An admin can now add user-specific filter parameters to a user's profile, to be passed into queries upon login.

Being RESTful

Added the ability to add paging to a REST-API datasource. The system will automatically loop through multiple pages to collect data when paging tokens are defined.

The system now supports the concept of a Loop JOIN. This type of join allows you to execute and retrieve the results for the first part of the join and then, for each row in the resulting set, extract the template value, update the second query in the join, execute it, and combine the result with the current row.
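Conceptually, a loop join works like the following Python sketch; the query runner and the queries themselves are hypothetical stand-ins, not Knowi's API:

```python
# Conceptual sketch of a loop join; run_query and the queries are
# hypothetical stand-ins.
def loop_join(run_query, first_query, second_query_template):
    joined = []
    for row in run_query(first_query):            # results of the first query
        # substitute the template value from the current row into query two
        second_query = second_query_template.format(**row)
        for match in run_query(second_query):     # execute the per-row query
            joined.append({**row, **match})       # combine with current row
    return joined
```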

Other Cool Stuff

Adding Steps to the Ad-hoc Grid
After creating your query and previewing the data returned in the ad-hoc grid, you can now add multiple steps to the same data query workflow.

Learn More
Grid Formatting
A new feature has been added that allows for alignment of data in the Data Grid widget type. The data grid also supports conditional formatting of colors based upon content value. Any formatting applied to data in the grid is passed through into subsequent PDF exports.

Learn More 
Automated Dashboard Sharing
There may be cases when any asset a user creates needs to be automatically shared with other groups. In such cases, you can apply an 'Automatic Share to Group' setting that automatically publishes any assets the user creates to those groups, making them available to other users.

Learn More
New Datasources
Knowi has added native integration with Snowflake, a cloud-based SQL data warehouse.

Learn More
New Visualizations
Threshold
This visualization allows for the simple tracking of your key metrics. A user can:
  • Select the metric to monitor
  • Enter a threshold value
  • Choose the display color for when the metric is <= the threshold
  • Choose the display color for when the metric is > the threshold
Data Summary
Displays data in summary form (e.g., total messages delivered, opened, etc.).