Tuesday, August 27, 2019

Mergers, Acquisitions and The Future of Data

The data industry is changing. As Game of Thrones grows in popularity, it seems the data industry is following the trend, with mergers, acquisitions, and takeovers coming one after another. Over the past couple of months we have observed:
  • Salesforce’s acquisition of Tableau 
  • Google’s acquisition of Looker
  • The merger of Sisense and Periscope
  • Logi Analytics' buyout of Zoomdata

What does this mean for the rest of the industry and its customers? 

To some degree, we knew this was going to happen, and it will continue to happen. When we observe explosive growth of solutions within a single category, consolidation is always an eventuality.

While for some this can rejuvenate a company and improve its productivity, not every joining of forces happens easily. As two companies are stitched together, figuring out how to work as one team, and how to make two completely different architectures work seamlessly, creates risk for both sides. How successful this match-making will be remains to be seen.

We have already seen major questions and lawsuits brought forward by stockholders of Tableau who were not happy with the all-stock acquisition by Salesforce. Yet, as predicted by Knowi CEO Jay Gopalakrishnan in ‘Coffee with Knowi ep. 1’, those suits turned out to be nothing more than noise, as the $15.7B deal closed in just two months.

Why is all this relevant?

The movement in the market, driven mainly by what we consider Generation 2 companies (those born around 2008-2012), sets up the analytics industry for the future. And a big part of that future is the implementation of Artificial Intelligence (AI) and Machine Learning in virtually every aspect of our lives. Just as it has been elsewhere, its implementation in the data industry is revolutionary. It takes data analysis beyond what any human has been able to do, combining hindsight and foresight to act on anomalies, trends, and other signals that would have been missed in the past. Machines can detect patterns in the data that would otherwise go unnoticed, giving businesses the ability to transform as they move into the future.

Combined with Business Intelligence (BI) and the ability to question your data in a Google-like fashion using Natural Language Processing (NLP), all on a single platform, data is going to change the way business is done all over the world.

Where does this leave Knowi?
As a Generation 3, multi-functionality platform, Knowi already has all the necessary tools to provide an end-to-end experience for its customers. The implementation of AI, BI, and NLP in its platform adds a layer that elevates the customer experience beyond what was previously possible, allowing real-time viewing of data insights and the ability to create visualizations that fit any need.
With all the technology in the palm of our hands the future is bright. There is no better time than right now to jump on board and experience the beauty of a multi-functionality platform that can take you into the future. 

Learn More: Check out CEO Jay Gopalakrishnan and COO Ryan Levy talking all things acquisitions and mergers here. Or, to sign up for a free 21-day POC, click here.

Monday, August 26, 2019

Parse cURL online with Analytics and Visualization

As originally posted on Medium. Written by Manny Ezeagwula. To view the original post, click here.

With the endless number of tools for calling a REST API, cURL remains one of the easiest ways to issue an HTTP request, and almost all API providers offer sample cURL commands. If you're just coming out of that cave: cURL is a fantastic command-line tool that can construct almost every HTTP interaction a browser can, including GET, POST, and PUT requests, custom headers, and much more.
While it's been relatively easy to make an HTTP request to extract (GET) data, what hasn't been straightforward is a way to parse and analyze the data returned from that request. Until now.
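To make those HTTP verbs concrete, here is a small sketch in Python's standard library showing how the same requests cURL issues are constructed; the endpoint URL is a placeholder, not a real API, and the requests are only built, not sent.

```python
# Build the HTTP requests described above; equivalent cURL commands in comments.
# The URL is a made-up placeholder, so nothing is actually sent over the network.
import json
import urllib.request

BASE = "https://api.example.com/v1/datasets"

# Equivalent of: curl https://api.example.com/v1/datasets
get_req = urllib.request.Request(BASE, method="GET")

# Equivalent of:
#   curl -X POST -H "Content-Type: application/json" -d '{"q": "sales"}' <url>
body = json.dumps({"q": "sales"}).encode("utf-8")
post_req = urllib.request.Request(
    BASE,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(get_req.get_method())   # GET
print(post_req.get_method())  # POST
```

A PUT request follows the same pattern with `method="PUT"`; cURL chooses the verb with its `-X` flag.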

What is Quick cURL?
Knowi’s Quick cURL is a lightweight, easy-to-use online tool that executes command-line requests and parses the response from XML, JSON, or CSV into tabular format.

Using Knowi’s Quick cURL tool, we can issue an HTTP request to Quandl’s API to retrieve financial data, parse the results, and apply advanced analysis. Quick cURL currently supports interaction with REST APIs using GET or POST commands.

The cURL sample command used is below if you want to give it a try:
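The "parse to tabular" step can be sketched in plain Python: the payload below is a small, made-up Quandl-style JSON sample (not actual API output), flattened into column names and rows the way a tabular view would present it.

```python
# Turn a Quandl-style JSON response (hypothetical sample) into tabular rows.
import json

raw = """
{
  "dataset_data": {
    "column_names": ["Date", "Close"],
    "data": [["2019-08-23", 177.75], ["2019-08-22", 182.04]]
  }
}
"""

payload = json.loads(raw)["dataset_data"]
columns = payload["column_names"]

# Pair each data record with the column names to get one dict per row.
rows = [dict(zip(columns, record)) for record in payload["data"]]

for row in rows:
    print(row)
```

Quick cURL performs this kind of flattening automatically on the response, whatever the source format.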

To parse and analyze the returned data, we can leverage Knowi’s Natural Language Processing (NLP) to ask questions and get answers instantly.

Best of all, Quick cURL allows you to save the output, with revision history, and share it with other users.

Try running an online curl command now!

Want to do more:

Friday, August 2, 2019

MongoDB Aggregations — Part 2

As originally posted on Medium on July 22nd by Nate Hall. Click here to go to the original source

Data Blends with Knowi

MongoDB is an open-source, NoSQL database built to simplify storage of large, document-based, unstructured data. This article is the second of a 3-part series on MongoDB analytics, with the purpose of showing how to blend data stored in MongoDB with other databases for unified data exploration using Knowi.

MongoDB Aggregations — Part 1 explored how to perform aggregations inside MongoDB, including examples of a few important operations to prepare data and learn proper syntax.
In part 2, we’ll dive into the new MongoDB Atlas aggregation pipeline builder and how to blend MongoDB data with other sources using Knowi.

The MongoDB Atlas aggregation pipeline builder update was released in early June 2019. It gives MongoDB users a new way to test and run aggregations in MongoDB Atlas. Testing aggregations before deploying is key to maintaining application stability and avoiding “hours of trial and error”.
To start using the new aggregation pipeline builder in MongoDB Atlas, open the Collections view and choose “Aggregation” next to the Find & Indexes tabs, as shown below:

From the drop-down menu, different aggregation “stages” can be tested, with auto-completion for operators to perform the assigned aggregation at each stage. This enables simplified testing & learning of 25+ different aggregation stages and the syntax behind them.
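As a sketch of two common stages, here is a pipeline written the way it would be passed to PyMongo's `collection.aggregate()`; the field names (`team`, `season`, `points`) are hypothetical. Below it, the same `$match` + `$group` logic is replayed in plain Python on an in-memory list, so the result is visible without a running server.

```python
# A two-stage aggregation pipeline, as it would be passed to PyMongo:
# filter to the 2019 season, then total points per team.
from collections import defaultdict

pipeline = [
    {"$match": {"season": 2019}},
    {"$group": {"_id": "$team", "total_points": {"$sum": "$points"}}},
]

# Sample documents standing in for a MongoDB collection.
docs = [
    {"team": "A", "season": 2019, "points": 10},
    {"team": "A", "season": 2019, "points": 7},
    {"team": "B", "season": 2019, "points": 4},
    {"team": "B", "season": 2018, "points": 9},
]

# $match stage: keep only documents matching the filter.
matched = [d for d in docs if d["season"] == 2019]

# $group stage: accumulate $sum per _id key.
totals = defaultdict(int)
for d in matched:
    totals[d["team"]] += d["points"]

grouped = [{"_id": team, "total_points": pts} for team, pts in sorted(totals.items())]
print(grouped)
```

The Atlas builder lets you preview the output of each stage like this before the pipeline ever touches production data.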
Once data inside MongoDB has been aggregated, the next step of “data engineering” usually requires joining data in MongoDB with other structured & unstructured databases — aggregating data across sources. This is done to contextualize information across the tech stack through a variety of methods, such as ETL, connecting via ODBC drivers, and data warehousing.
Depending on the complexity of the data stack, these methods are increasingly time-intensive — requiring teams of data engineers to select relevant data for downstream applications, make sure that the data is in relational format by flattening nested, unstructured data (eg. collections in MongoDB) and then load it into another data warehouse before analysis.
Knowi can be used to instantly explore data sets, cleanse messy data with SQL, blend multiple information stores using common join-keys, and build visualizations or downstream applications with Natural Language Intelligence, enabling shortened analytics product development cycles.

The first step to joining data across databases with Knowi is to sign up for an account at www.knowi.com.
Once you’ve signed up, you’ll land on the front page of Knowi’s interface. Navigate to the “Datasources” tab, click the “New Datasource” button, and select the option for MongoDB or MongoDB Atlas, depending on how your team deploys MongoDB.
To connect to a MongoDB instance, enter your host, port, database name, and login credentials. The other properties (database properties, agent, and SSH tunnel) can be used to simplify integration alongside data security protocols.
For MongoDB Atlas, all that is needed to explore data in Knowi is the Atlas Connection String.
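The Atlas connection string follows MongoDB's standard SRV format; a sketch with placeholder credentials and cluster name:

```
mongodb+srv://<username>:<password>@<cluster-name>.mongodb.net/<database>?retryWrites=true&w=majority
```

Atlas displays the exact string for your cluster under its "Connect" dialog, so it can be copied directly into Knowi.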

Exploring MongoDB Atlas collections with Knowi

Once the MongoDB instance has been connected, the contents of accessible collections can instantly be returned and explored using the data explorer UI on the left-hand side of the Knowi query screen. This enables drilling into the contents of individual documents inside collections, regardless of how nested the data is — as shown with the example of Visitor Team Statistics, which is nested in 5+ layers of data.
Data exploration is important because it enables users to evaluate whether data transformation is necessary to understand the contents of disparate databases. Inside Knowi, the Cloud9QL Query box can be used to complete necessary transformations and aggregations as introduced in part 1.
Once MongoDB collections have been connected, explored, and confirmed as usable inside Knowi — the join function can be used to blend MongoDB alongside any other NoSQL, SQL, or API-centric database to create a unified, virtualized dataset from multiple sources. To test out blending MongoDB data in Knowi yourself, check out this walk-through — which shows how to join MongoDB with a relational, MySQL database.
Knowi can connect and join any combination of 35+ structured and unstructured databases, including leaders in the NoSQL space like Couchbase and Cassandra (DataStax). Once data sources have been connected to Knowi, building a joined dataset becomes intuitive.
For this example, we’ll blend data from MongoDB Atlas and MySQL:

Joining marketing data from MongoDB Atlas with customer location data from MySQL
By specifying “customer” as a common join-key between the marketing data in MongoDB Atlas and the customer-location data stored in MySQL, data across silos can be blended without prior reformatting or flattening. Joining these datasets across Mongo and MySQL creates a unified view of the data in minutes, with no need for ETL workloads to process the different data structures.
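Conceptually, the blend works like an inner join on the shared key; the sketch below replays it in plain Python with invented records (in Knowi the join runs across the live MongoDB and MySQL sources, not in-memory lists).

```python
# Join two sources on the common "customer" key; all data here is made up.
marketing = [  # e.g. documents from MongoDB Atlas
    {"customer": "acme", "campaign": "spring-promo", "clicks": 120},
    {"customer": "globex", "campaign": "spring-promo", "clicks": 45},
]

locations = [  # e.g. rows from MySQL
    {"customer": "acme", "city": "Oakland"},
    {"customer": "globex", "city": "Austin"},
]

# Index the MySQL side by the join key for constant-time lookups.
loc_by_customer = {row["customer"]: row for row in locations}

# Inner join: keep marketing records with a matching location, merging fields.
blended = [
    {**m, "city": loc_by_customer[m["customer"]]["city"]}
    for m in marketing
    if m["customer"] in loc_by_customer
]
print(blended)
```

Each blended record carries fields from both silos, which is what makes the unified dataset queryable as one source downstream.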
When an organization’s data runs through NoSQL databases like MongoDB, it is no longer necessary to install ODBC drivers or build ETL processes to join that data with other sources, enabling faster generation of insight across disparate data using natural language processing.
With Knowi, queries can be executed across data silos without extensive engineering resources. Combined with an end-to-end analytics product, including visualizations, machine-learning-based AI, and external data aggregation capabilities for MongoDB and other sources of mission-critical data, Knowi can help consolidate the aggregation of MongoDB data with the other components of the enterprise data portfolio.
More information about Knowi’s NLP-driven visualization on MongoDB can be found here, and will be the focus of MongoDB Aggregations - Part 3.
Learn More: To try this yourself, sign up for a 21-day Knowi trial. Click here.