Friday, September 6, 2019

Online cURL tool to authenticate JWT Bearer tokens

As originally posted on Medium by Manny Ezeagwula. To read the original article, click here.


Postman has a great interface with support for numerous authorization flows. But nothing beats the simplicity of running your own cURL request to quickly retrieve a Bearer token.

Now if, at this point, you’re wondering what a Bearer token is, this is an excellent piece on JWT, along with a hands-on guide to setting one up.

However, if the command line terrifies you (that’s okay, we’ve all been there), Knowi’s online Quick cURL tool makes the experience easy and fun.

In this example, we’ll look at issuing a cURL command to retrieve a JWT token in 3 steps:

  1. head to https://knowi.com/curl
  2. enter request: curl -u knowi:carb0n https://api2.watttime.org/v2/login
  3. hit submit and that’s all!
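For reference, here is the same flow from a local terminal, plus how the returned token is typically used afterwards. This is a minimal sketch: the response shape and the follow-up endpoint are assumptions based on common JWT APIs, so check the API's docs for the real ones.

    # Request a JWT using HTTP basic auth (-u user:password)
    curl -u knowi:carb0n https://api2.watttime.org/v2/login
    # Assumed response shape: {"token": "eyJhbGciOi..."}

    # Pass the token as a Bearer credential on later requests
    # (endpoint shown is hypothetical)
    curl -H "Authorization: Bearer eyJhbGciOi..." https://api2.watttime.org/v2/index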



The best part: Quick cURL parses your JSON response into a tabular format without you lifting a finger. Isn’t that nice!!

Give it a try with any cURL GET or POST request, or check out their other tool to parse JSON and build visualizations from REST API responses.


For more great information, or to start your own FREE POC, click here.

Tuesday, August 27, 2019

Mergers, Acquisitions and The Future of Data


The data industry is changing. As Game of Thrones grows in popularity, the data industry seems to be following the trend, with mergers, acquisitions and takeovers coming one after the other. Over the past couple of months we have observed:
  • Salesforce’s acquisition of Tableau 
  • Google’s acquisition of Looker
  • The merger of Sisense and Periscope
  • Logi Analytics’ buyout of Zoomdata

What does this mean for the rest of the industry and its customers? 

To some degree, we knew this was going to happen and will continue to happen. When we observe explosive growth of solutions within a single category, consolidation is always an eventuality. 

Whilst for some this can rejuvenate a company and improve its productivity, not every joining of forces happens easily. Once companies begin to come together, the difficulty of figuring out how to work as one team, and how to make two architecturally different companies function seamlessly as one, creates risk for both sides. How successful this match-making will be remains to be seen. 

We have already seen major questions and lawsuits brought forward by Tableau stockholders who were not happy with the all-stock acquisition by Salesforce. Yet, as predicted by Knowi CEO Jay Gopalakrishnan in ‘Coffee with Knowi ep. 1’, those suits turned out to be nothing more than noise as the $15.7B deal closed in just two months.

Why is all this relevant?

The movement in the market, mainly by what we consider Generation 2 companies (those born around 2008-2012), sets up the analytics industry for the future. And a big part of that future is the implementation of Artificial Intelligence (AI) and Machine Learning into virtually every aspect of our lives. Just as it has been elsewhere, its implementation into the data industry is revolutionary. It takes data beyond what any human has been able to do alone, combining hindsight and foresight to act on anomalies, trends and other signals that were missed in the past. The machines can detect patterns in the data that would otherwise go unnoticed, giving businesses the ability to transform as they move into the future. 

Combined with Business Intelligence (BI) and the ability to question your data in a Google-like fashion using Natural Language Processing (NLP), all on a single platform, data is going to change the way business is done all over the world. 

Where does this leave Knowi?
As a Generation 3, multi-functionality platform, Knowi already has all the necessary tools to provide an end-to-end experience for its customers. The implementation of AI, BI and NLP into its platform creates an additional layer that elevates the customer experience beyond what has been possible in the past, and allows real-time viewing of data insights with the ability to create visualizations that fit any need. 
With all this technology in the palm of our hands, the future is bright. There is no better time than right now to jump on board and experience the beauty of a multi-functionality platform that can take you into the future. 



Learn More: Check out CEO Jay Gopalakrishnan and COO Ryan Levy talking all things acquisitions and mergers here. Or, to sign up for a free 21-day POC, click here.


Monday, August 26, 2019

Parse cURL online with Analytics and Visualization

As originally posted on Medium. Written by Manny Ezeagwula. To view the original post, click here.

With the endless number of tools available to call a REST API, cURL remains one of the easiest ways to issue an HTTP request. Not to mention, almost all API providers offer sample cURL commands. If you’re just coming out of that cave: cURL is a fantastic command-line tool that can construct almost every HTTP action allowed in a browser, such as GET, POST and PUT, along with custom headers and much more.
While it’s been relatively easy to make an HTTP request to extract or GET data, what hasn’t been straightforward is a way to parse and analyze the data returned from an HTTP request. Until now.

What is Quick cURL?
Knowi’s Quick cURL is a lightweight, easy-to-use online tool that executes command-line requests and parses the response from XML, JSON or CSV into a tabular format.

Using Knowi’s Quick cURL tool we can issue an HTTP request to Quandl’s API to retrieve financial data, parse the results and apply advanced analysis. Quick cURL currently supports interaction with REST APIs using GET or POST commands.

The cURL sample command used is below if you want to give it a try:
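(The embedded command did not survive reposting. Below is a representative Quandl GET request of the same shape; the dataset code WIKI/FB and YOUR_API_KEY are placeholders, so substitute your own.)

    # Retrieve a Quandl time-series dataset as JSON
    curl "https://www.quandl.com/api/v3/datasets/WIKI/FB.json?api_key=YOUR_API_KEY"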

To parse and analyze the returned data, we can leverage Knowi’s Natural Language Processing (NLP) to ask questions and get answers instantly.


The best part: Quick cURL allows you to save the output and share it with other users, complete with a revision history.

Try running an online curl command now!




Want to do more?

Friday, August 2, 2019

MongoDB Aggregations — Part 2

As originally posted on Medium on July 22nd by Nate Hall. Click here to go to the original source.


Data Blends with Knowi


MongoDB is an open-source, NoSQL database built to simplify storage of large, document-based, unstructured data. This article is the second of a 3-part series on MongoDB analytics, with the purpose of showing how to blend data stored in MongoDB with other databases for unified data exploration using Knowi.


MongoDB Aggregations — Part 1 explored how to perform aggregations inside MongoDB, including examples of a few important operations to prepare data and learn proper syntax.
In part 2, we’ll dive into the new Mongo Atlas aggregation pipeline builder and how to blend MongoDB data with other sources using Knowi.

The MongoDB Atlas aggregation pipeline builder update was released in early June 2019. It gives MongoDB users a new way to test and run aggregations using MongoDB Atlas. Testing aggregations before deploying is key to maintaining application stability and avoiding “hours of trial and error”.
To start using the new Aggregation Pipeline Builder in the MongoDB Atlas cloud, click into the Collections view and choose “Aggregation” next to the Find & Indexes tabs, as shown below:

From the drop-down menu, different aggregation “stages” can be tested, with auto-completion for operators to perform the assigned aggregation at each stage. This enables simplified testing & learning of 25+ different aggregation stages and the syntax behind them.
Once data inside MongoDB has been aggregated, the next step of “data engineering” usually requires joining data in MongoDB with other structured & unstructured databases — aggregating data across sources. This is done to contextualize information across the tech stack through a variety of methods, such as ETL, connecting via ODBC drivers, and data warehousing.
Depending on the complexity of the data stack, these methods are increasingly time-intensive, requiring teams of data engineers to select relevant data for downstream applications, make sure that the data is in relational format by flattening nested, unstructured data (e.g. collections in MongoDB), and then load it into another data warehouse before analysis.
Knowi can be used to instantly explore data sets, cleanse messy data with SQL, blend multiple information stores using common join keys, and build visualizations or downstream applications with Natural Language Intelligence, enabling shortened analytics product development cycles.

The first step to joining data across databases with Knowi is to sign up for an account at www.knowi.com.
Once you’ve signed up, you’ll land on the front page of Knowi’s interface. Navigate to the “data sources” tab in Knowi, click the “New Datasource” button, and select the option for MongoDB or MongoDB Atlas, depending on how your team deploys MongoDB.
To connect to a MongoDB instance, enter your host ID, port number, database name, and login credentials. The other properties (database properties, agent & SSH tunnel) can be used to simplify integration alongside data security protocols.
For MongoDB Atlas, all that is needed to explore data in Knowi is the Atlas Connection String.
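For reference, an Atlas connection string follows this general shape (placeholders in angle brackets; Atlas displays your exact string under its “Connect” button):

    mongodb+srv://<username>:<password>@<cluster-host>/<database>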


Exploring MongoDB Atlas collections with Knowi

Once the MongoDB instance has been connected, the contents of accessible collections can instantly be returned and explored using the data explorer UI on the left-hand side of the Knowi query screen. This enables drilling into the contents of individual documents inside collections, regardless of how nested the data is — as shown with the example of Visitor Team Statistics, which is nested in 5+ layers of data.
Data exploration is important because it enables users to evaluate whether data transformation is necessary to understand the contents of disparate databases. Inside Knowi, the Cloud9QL Query box can be used to complete necessary transformations and aggregations as introduced in part 1.
Once MongoDB collections have been connected, explored, and confirmed as usable inside Knowi — the join function can be used to blend MongoDB alongside any other NoSQL, SQL, or API-centric database to create a unified, virtualized dataset from multiple sources. To test out blending MongoDB data in Knowi yourself, check out this walk-through — which shows how to join MongoDB with a relational, MySQL database.
Knowi can connect and join any combination of 35+ structured and unstructured databases, including leaders in the NoSQL space like Couchbase and Cassandra (DataStax). Once data sources have been connected to Knowi, building a joined data set becomes intuitive.
For this example, we’ll blend data from MongoDB Atlas and MySQL:

Joining marketing data from MongoDB Atlas with customer location data from MySQL
By specifying “customer” as a common join key between marketing data in MongoDB Atlas and customer-location data stored in MySQL, data across silos can be blended without prior reformatting or flattening. Joining these data sets across Mongo and MySQL creates a unified view of the data in minutes, with no need for an ETL workload to process the different data structures.
When an organization’s data is running through NoSQL databases like MongoDB — it is no longer necessary to install ODBC drivers or ETL processes to join that data with other sources of information, enabling faster generation of insight across disparate data using natural language processing.
With Knowi, queries can be executed across data silos without extensive engineering resources. Combined with an end-to-end analytics product that includes visualizations, machine-learning-based AI, and external data aggregation capabilities for MongoDB and other sources of mission-critical data, Knowi can help consolidate the aggregation of MongoDB-based data with other components of the enterprise data portfolio.
More information about Knowi’s NLP-driven visualization on MongoDB can be found here, and will be the focus of MongoDB Aggregations - Part 3.
Learn More: To try this yourself, sign up for a 21-day Knowi trial. Click here.

Tuesday, July 23, 2019

Knowi’s Platform Simplifies The Process Of Compiling And Analyzing Unstructured And Structured Data

As originally posted on Tech Company News on June 28th, 2019. Click here to view the original publication.
Below is our recent interview with Ryan Levy, Chief Operating Officer at Knowi:
Q: Could you provide our readers with a brief introduction to Knowi?
A: Knowi is an Augmented Analytics company. Our platform simplifies the process of compiling and analyzing unstructured and structured data. Recently, we developed an anthem to help consumers understand what we do in a simple way: ‘Any, any, any, any.’ This breaks down to any data, anywhere, any size, for anyone.
Knowi instantly connects to any data, no matter what the data is: structured, unstructured or modern/messy data. And we don’t care where the data is; on-site, in the cloud, onshore or global, our platform connects to your data from anywhere. Your data could be any size: small, medium, large, big data, massive data; our platform was made for this.
Where it starts to get most exciting is the anyone: the people. You no longer need technical skills, or to submit a request to a technical team member, to get insights into your data. Users simply log in to Knowi and can access and view dashboard widgets or build customized reporting on the spot. Your data; your way.

Q: Can you give us insights into your platform?
A: Knowi is not just another BI or analytics tool. We are a full-fledged, end-to-end, Augmented Analytics platform. That means that we are a platform composed of multiple modules that address the needs of our customers and provide the capability and functionality they need in order to see the data the way they need to see it.
At the core of our platform is our Data-as-a-Service. This is where our ‘data science engine’ allows users to natively and instantly connect to any and all disparate data sources (structured, unstructured or anything in between) and then extracts the data to create visualization dashboards. These visualizations are completely customizable based on the unique needs of each of our customers’ businesses and are easy to understand and manipulate without any technical knowledge. It doesn’t matter who you are in the organization; you have the ability to see your data how you want it, when you want it.
And if that wasn’t enough data power for you already, we then layer AI on top in the form of machine-learning (encompassing Classification, Regression and Anomaly Detection).
The whole premise around Knowi is to provide a simple, self-service BI platform that covers everything from connecting to your data, providing the AI piece, the machine learning piece, the natural language processing piece and then the actions and visualizations behind it.
Q: You’ve recently announced the latest release of Natural Language BI 2.0; could you tell us something more?
A: We are super excited about this release because users can now query their data with a “Google-like” search. Knowi users can use natural language, without having to understand how to write complex queries or any kind of query syntax. In addition, you can simply ask a question of your data in real time and get insights immediately. And just as important, we don’t restrict the data required to answer the question to a single data source; we allow you to ask questions across all your data sources.
Q: Why is now the time to change how we do analytics?
A: For decades, every company has been driven by data, whether you are the pizza shop on the corner or a billion-dollar software company. We are looking at numbers, defining our return on investment, deciding what our next move is to grow, and it’s all data. But the data has become unmanageable and siloed. We have made it easy to collect data, but we haven’t really focused much on how to make it easier to extract and analyze that data.
The old way of running analytics and managing these data sources to provide some kind of visualization has been a complex process. It’s been intensive from a resource and cost perspective, because you’ve previously been required to use multiple tools and multiple people to do something to your data. That means by the time you get your answer, the data may have changed entirely.
Also consider this: analytics as we know it has traditionally been a backwards view. Most tools today look at data about what has already taken place; that is what they are designed to do. With Augmented Analytics, we are now combining the concepts of AI and BI to create models and give customers the ability to see what may happen and what actions may be taken. This is what you may have heard of as predictive and prescriptive analytics.
The reason why we built the platform that we did, and why we didn’t just build a visualization tool or a Data-as-a-Service tool, is because we want it to be scalable and to grow with the industry. We will always have data, so let’s fix the way we use it and access it now so that it continues to work for us in the future.
Q: You’re not a typical silicon valley startup – what makes you different?
A: The company was originally formed around an idea to address modern data architecture complexity. Built truly out of a basement in Oakland and bootstrapped with no institutional funding, we waited a substantial period of time to prove that our product was viable and could do what we said it could before we actually recruited customers and users.
Generally, startups will raise funds based on a concept, use those funds to build a team, and then go out and procure customers. We kind of did it backwards in that respect. As a Generation 3 platform, our vision is to lead the wave of Augmented Analytics solutions that will transform how enterprises are run. We believe we have a great customer base currently and have allowed ourselves plenty of room to grow.

Q: What are your plans for the future?
A: From a business perspective, we’ll continue to be a leading force in Augmented Analytics. Knowi is different from other tools available to businesses today, and we will continue to focus on how our platform allows organizations to use data to transform their businesses.
We will continue to focus our investment, our expertise and our go-to-market not just around Augmented Analytics and the view of data as of today, but around really driving true value out of where this space is going to go.
In the end, it is really more about redirecting how we can help organizations understand what they can actually do with the data and how valuable the data is rather than trying to figure out how we are going to untangle it and make sense of it. And that is the core principle behind Knowi.





Learn More: For a free 21-day trial to enhance your data analytics and be a part of the movement towards Augmented Analytics, click here.


Monday, July 22, 2019

MongoDB Aggregations — Part 1

As originally posted on Medium on June 25th. Click here to go to the original source.

Getting Started Aggregating Data with MongoDB



MongoDB is an open-source, NoSQL database built to simplify storage of large, document-based, unstructured data. This article is the first of a 3-part series on MongoDB analytics, with the purpose of showing how to aggregate data in MongoDB and learn correct MongoDB query syntax using Knowi.

Why put data in MongoDB?

Unstructured data has become more prevalent over the past decade as the number of collection points has increased across most business technology stacks, with IDC estimating that 80% of enterprise data remains unstructured.
Each new collection point provides a different lens through which to view an organization’s health: mobile data is growing exponentially from phones and laptops across the world, while text-based information such as customer support conversations and web-page traffic provides new ways to understand the channels of communication that drive every forward-thinking business. The amount of unstructured information across business ecosystems will continue to expand dramatically throughout the 2020s.
Given the velocity and volume of data from these sources, MongoDB offers a premier NoSQL solution to flexibly store, index, and query the proliferating mass of unstructured data. Unlike relational databases, MongoDB does not require a schema defined upfront; each data object is stored as a separate document inside a “collection”. Queries on MongoDB can be executed ad hoc to return data based on fields, ranges, or regular expressions, using JavaScript.
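For instance, a minimal ad hoc query mixing a field match, a range, and a regular expression might look like this in the mongo shell (the collection and field names here are hypothetical):

    // Find customers whose name starts with "Kno" and who placed 10 or more orders
    db.customers.find({ name: /^Kno/, orders: { $gte: 10 } })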
Perfect for building fast-scaling apps, Mongo is simple to set up. The rest of this article will explore how to aggregate MongoDB-based data to prepare it for downstream purposes.

How to aggregate data in MongoDB

Most organizations run queries against MongoDB using the default JavaScript command-line client. However, MongoDB can also be queried using Python, PHP, C#, Perl, Ruby, or the MongoDB Compass GUI.
Here’s an example of how to execute a query in MongoDB:
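(The embedded snippet was lost in reposting; a minimal equivalent in the mongo shell looks like this:)

    // Return every document in the named collection
    db.getCollection('Collection to query').find({})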


This will return all of the metrics associated with the ‘Collection to query’ specified inside MongoDB.
Aggregation is critical for processing data to return computed results. In Mongo, aggregations can be used to group values from multiple documents and perform calculations on the grouped data to return a single result. This is a vital step to prepare data for analytics, as aggregating unstructured data enables teams to find trends and correlations between data-points and prepare for downstream analytics functions.
Inside MongoDB, there are three main ways to aggregate data: the aggregation pipeline, the map-reduce function, and single purpose aggregation methods (links to MongoDB documentation provided).
MongoDB’s aggregation framework is modeled on the concept of data processing pipelines. Documents enter a multi-stage pipeline that transforms them into an aggregated result. A few important stages include:

  • $group — groups documents by a specified expression and outputs one document per distinct grouping, characterized by the _id field, to the next stage. Outputted documents can include accumulator expressions computed over each group. This is expressed as:
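(The original snippet was an embedded image; this is the general form from MongoDB’s documentation, with placeholders in angle brackets, followed by a short hypothetical example:)

    { $group: { _id: <expression>, <field1>: { <accumulator1>: <expression1> }, ... } }

    // e.g. total "sent" per customer:
    { $group: { _id: "$customer", totalSent: { $sum: "$sent" } } }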



  • $filter — returns a subset of an array, including only the elements that match the specified condition (an array expression operator, used inside stages such as $project rather than as a standalone stage). This is expressed as:
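(General form, from MongoDB’s documentation, with a hypothetical example of filtering an array field:)

    { $filter: { input: <array>, as: <string>, cond: <expression> } }

    // e.g. keep only array items greater than 100 (field names are hypothetical):
    { $project: { bigItems: { $filter: { input: "$items", as: "item", cond: { $gt: ["$$item", 100] } } } } }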


  • $match — filters the documents passed between stages, keeping only those that match the specified query. $match should be used early in the aggregation pipeline. This is expressed as:
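(General form, from MongoDB’s documentation, with a hypothetical example:)

    { $match: { <query> } }

    // e.g. pass along only documents whose status field equals "A":
    { $match: { status: "A" } }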

  • $limit — caps the documents for the next stage at a specified number, only passing through that many documents. This is expressed as:
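(General form, from MongoDB’s documentation:)

    { $limit: <positive integer> }

    // e.g. pass only the first 5 documents to the next stage:
    { $limit: 5 }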


  • $project — passes documents with only the specified fields to the next stage, helping aggregate data by specific categories. This is expressed as:
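(General form, from MongoDB’s documentation, where 1 includes a field and 0 excludes it:)

    { $project: { <field1>: <1 or 0 or expression>, ... } }

    // e.g. pass along only the customer and sent fields (hypothetical names):
    { $project: { customer: 1, sent: 1 } }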


These aggregations can be used to drive functionality from data including building analytics visualizations, machine-learning prediction workflows, and pushing data into applications. For more specifics on different aggregation methods possible inside of MongoDB, check out their documentation page here.

Learning & Practicing MongoDB aggregations using Knowi

Knowi is an augmented analytics platform that enables teams to create queries on NoSQL databases like MongoDB, Couchbase, and Cassandra using a point-and-click interface. Knowi can be used to generate queries on MongoDB and review proper syntax for aggregating data. Let’s walk through an example of setting up a MongoDB aggregation in Knowi.
First — head to Knowi’s MongoDB Querying page. From here, you can immediately access a cloud-hosted live demo of a MongoDB database and start running queries and aggregating data using Knowi on the cloud.



Second — in the “Query Builder” section, click on Collections and choose “sendingActivity”. Notice that as you change the MongoDB collection, the native MongoDB query generator automatically builds the query under the “query editor”. This is a great way to learn how to write aggregations and queries in MongoDB; feel free to try these steps with other collections hosted in the Knowi trial database or with your own MongoDB data.
Third — let’s run through an aggregation. Click the drop-down for “Measures and Groups” and click into the metrics box. Select “customer” and “sent” as the metrics to query. Notice that as each field is selected, the query automatically updates on the right side of the screen.

Double-click the box for “Sent”, choose “Sum” in the operations box, and hit OK. In the query editor on the right, you can immediately see that the query has been updated to include the sum aggregation, roughly as sketched below. This functionality can be used to see how to write MongoDB aggregations inside Knowi.
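(A sketch of the kind of query this step generates; the exact output of Knowi’s builder may differ:)

    // Sum of "sent" across the collection (grouping comes in the next step)
    db.sendingActivity.aggregate([
      { $group: { _id: null, sent: { $sum: "$sent" } } }
    ])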




Using Knowi’s UI, any MongoDB novice can quickly begin writing queries to unlock their understanding of the data available and the best aggregations to perform on that data. Now let’s try a grouping aggregation using Knowi. Click into the “Dimensions/Group By” box and select “date”. This will be immediately reflected in the query editor, where the $group sequence has been completed with date as the grouping id, roughly as sketched below.
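(Again a sketch of the generated query; with “date” selected, the grouping id becomes the date field:)

    db.sendingActivity.aggregate([
      { $group: { _id: "$date", sent: { $sum: "$sent" } } }
    ])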



We’ve now performed two aggregations in MongoDB using Knowi: a summation of the number of sent messages in a Mongo collection, and a grouping of documents by date. The value of this becomes apparent when you select “Show me”, as the result of the aggregations is immediately displayed.
In the upcoming MongoDB Aggregations — Part 2, we’ll explore how Knowi can help you blend collections of data in MongoDB with other sources of unstructured data like Couchbase and DataStax, as well as relational data systems like PostgreSQL or Snowflake.


Learn more about Knowi's ability to blend data in MongoDB by visiting our website and starting your own 21-day free trial.