Thursday, June 1, 2017

Knowi Product Update - Machine Learning; Enterprise Management API; New Name!

Since our Q1 Update, a few things have changed, namely our name!  You've probably already noticed that back in late April, we became Knowi.  We've been planning a name change for a while and with the latest set of product releases, the timing seemed right to make the change from Cloud9 Charts to Knowi.  We hope you like it.

Name changes aside, we are excited to tell you about a number of new features we added this past quarter.  We've been hard at work adding:

AI + BI for Modern Data

We're pretty proud to be the first in the market to combine all the power of our Business Intelligence platform for unstructured and structured data with integrated Artificial Intelligence. With AI + BI, you can blend hindsight and foresight to drive actions from your data for any number of use cases. Not only that, it also enables you to offer value added machine learning capabilities to your customers, for embedded use cases. We've integrated powerful machine learning capabilities into the platform including:
  • Built-in algorithms (you can also upload your own proprietary algorithms)
  • Evaluation statistics, like accuracy and precision, to help you determine the best algorithm (see the sketch after this list)
  • Model training using historical data or an uploaded training dataset
  • Save the trained model and add it to any relevant query with a couple of clicks
  • Configure actions to be taken based on query results, including triggering a downstream application or sending a notification
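To give a concrete feel for the evaluation statistics mentioned above, here is a minimal sketch using scikit-learn and synthetic data. It is purely illustrative of the metrics involved; it is not Knowi's built-in API, where algorithms, training, and evaluation are configured in the product itself.

# Illustrative only: compare two candidate classifiers on held-out data using
# accuracy and precision, the kind of evaluation statistics surfaced in Knowi.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=42)      # stand-in training data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier()):
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(type(model).__name__,
          "accuracy:", round(accuracy_score(y_test, preds), 3),
          "precision:", round(precision_score(y_test, preds), 3))

The model with the better held-out metrics is the one you would save and attach to your queries.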

This powerful new feature is currently in beta and we will be adding other algorithms to it over the next few weeks. Sign on or Sign up to try it out. To get more details, read our machine learning documentation.

Management API

We've developed a public Management API to help our enterprise customers manage various aspects of the Knowi platform. The API is RESTful, allowing external services and apps to manage users and groups, datasources, queries, dashboards, and widgets programmatically.

To get the details, read our API documentation which also includes code examples.
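As a rough sketch of what a call against a RESTful management API like this can look like, here is a hypothetical example using Python's requests library. The host, endpoint path, and authentication header below are placeholders; the actual routes, payloads, and token handling are described in the API documentation.

# Hypothetical sketch: list users via a REST management API.
import requests

BASE_URL = "https://<your-knowi-host>/api"                 # placeholder host and path
HEADERS = {"Authorization": "Bearer <your-api-token>"}     # placeholder auth scheme

resp = requests.get(f"{BASE_URL}/users", headers=HEADERS)
resp.raise_for_status()
for user in resp.json():
    print(user)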

Fine-Grained User-Level Filters and Enterprise Access Controls

The combination of content filters, both at a user level and shared-URL level, and query parameters allows admin users to limit user access to a subset of available data on the same dashboard or widget. Filters can be applied in one of two ways (sketched below):
  • Filters are passed through as query parameters, at runtime, for direct queries
  • Filters are applied on top of the data returned by a query
Read more details about User Filters in our documentation.
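To make the two modes concrete, here is a minimal, purely illustrative sketch in Python; the 'region' field and the SQL are hypothetical and not tied to Knowi's implementation.

# A user-level filter restricting a user to one region (hypothetical field).
user_filter = {"region": "EMEA"}

# Mode 1: the filter is passed through as a query parameter at runtime,
# so the direct query itself only returns the permitted rows.
sql = "SELECT product, revenue FROM sales WHERE region = %(region)s"
# cursor.execute(sql, user_filter)

# Mode 2: the filter is applied on top of the data the query already returned.
rows = [
    {"product": "A", "revenue": 100, "region": "EMEA"},
    {"product": "B", "revenue": 250, "region": "APAC"},
]
visible = [r for r in rows if r["region"] == user_filter["region"]]
print(visible)  # this user only ever sees the EMEA row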


Knowi admins now have more granular control to automate the permissions for sharing data assets (dashboards, queries, datasources, agents). By default, assets are private to the user, unless shared to other users or groups. Now, admins can configure assets so they are automatically shared within a group. Furthermore, each asset can be configured for granular read or edit access at a group or an individual user level.

For details on new access controls, please reference our permissions documentation.

Searchable Knowledge Base and Introducing Lorraine Williams, Head of Success

We are excited to introduce Lorraine to you. Lorraine heads up product management and customer success. We've known Lorraine for a while as a customer, and she comes to us with many years of product and engineering management experience. She's also really good at math, having just completed her MS in Analytics.

Going forward, she will ensure our customers get the most out of the Knowi platform from day one. She also will work closely with the Engineering team to ensure we continue to deliver functionality important to our customers and our long-term product vision.

Our customers will see some immediate changes in the way we communicate product updates, including the posting of weekly release notices in our new searchable Knowi Knowledge Base. Yay, Lorraine!

OTHER COOL STUFF

NEW VISUALIZATIONS
We've added Waterfall Charts, Box-Plot Diagrams, and the ability to embed an external webpage as a widget in your dashboard. You can also now customize the CSS of dashboards for custom branding.
CATEGORIZING WIDGETS AND DASHBOARDS
To help manage large numbers of widgets and dashboards, we've added the ability to categorize or "tag" widgets and dashboards so you can group related items together. When saving a new widget or dashboard, you can create a new category (names will be auto-suggested) or select an existing category from a list.
DATA EXPLORER
To make it even easier for you to understand, discover, and explore your multi-structured and unstructured data sources, Data Explorer performs a "schema on read": it samples your data and infers relationships and any underlying structure from it. You can drill down into nested data and drag and drop fields into query parameters.

Data Explorer is available from the Query Builder page and automatically appears on the left side as soon as you select a datasource.
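Conceptually, "schema on read" means sampling documents and inferring field paths, types, and nesting from the data itself rather than from a predefined schema. Here is a toy sketch of the idea in Python; it is not Knowi's actual discovery code.

# Toy "schema on read": walk sampled documents and record each field path
# together with the value types observed for it, including nested objects.
from collections import defaultdict

def collect_fields(doc, prefix="", schema=None):
    schema = schema if schema is not None else defaultdict(set)
    for key, value in doc.items():
        path = prefix + key
        if isinstance(value, dict):
            collect_fields(value, path + ".", schema)   # recurse into nested objects
        else:
            schema[path].add(type(value).__name__)
    return schema

sample = {"borough": "Bronx", "cuisine": "Bakery",
          "address": {"zipcode": "10462", "coord": [-73.856077, 40.848447]}}
print(dict(collect_fields(sample)))
# {'borough': {'str'}, 'cuisine': {'str'}, 'address.zipcode': {'str'}, 'address.coord': {'list'}}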
ENHANCED PDF REPORTS
For PDF reporting, we've enhanced the layout so that if a large table/grid is part of the dashboard, the full table will be rendered in the PDF. We've also added headers and footers to make multi-page reports easier to read.
NEW DATASOURCE
We've added support for Presto, an open source distributed SQL query engine for running interactive analytic queries against data sources of all sizes, ranging from gigabytes to petabytes.
Try Knowi Free 
https://www.knowi.com/

Tuesday, May 9, 2017

We're Hiring!

We are actively looking for Sales Development Representatives and have interviewed a number of candidates to date, but we are still looking.  I'd actually like to share the experience we had with one of them, to maybe help those of you looking to stand out from the pack.

First, this candidate found us.  She researched company profiles looking for the type of company she wanted to join.  She then found our CEO on LinkedIn and emailed him describing how she would be an asset to us, and asked to have a quick phone conversation.

She is a graduate of alwayshired.com which is a Sales Bootcamp program, at least according to their website.  We have no affiliation with them, but if this candidate is a good representation of the quality of their graduates, then they seem to know what they are doing.

Anyhow, we went through the regular interview process and eventually offered her a position.  We knew she was pretty green, so why did we offer her a job?
  • She hustled to get an interview and, in doing so, demonstrated her sales methodology
  • She confidently pitched herself but without being obviously boastful or arrogant
  • She was genuine and shared personal experiences and stories (not in a weird way)
  • She had a 100% positive attitude
  • She demonstrated a high level of personal integrity and character
Cloud9 Charts is Knowi
Like a good SDR, she had secured a few leads in her pipeline, and we were just one of them. Unfortunately, for us, we lost out to another company closer to her home.

The following week, we changed our name from Cloud9 Charts to Knowi.  We were totally surprised to see her walk through the door carrying a cake.  Yes, she bought us a cake!  Now, buying us a cake to celebrate our name change after not taking our job offer is arguably going above and beyond but who doesn't love cake?

If you haven't picked up on it yet, it's your character and attitude that make you stand out in an interview.  You should assume everyone has similar skills and experience, so it's about allowing people to see who you are in a genuine way.  It sounds simple but is not.  That is why, as a hiring manager, when you see it, that person immediately stands out.

Thanks, Anna Aleksenko.  We loved the cake!

We are still looking for the next SDR superstar in the making.  Are you the one?  You can see the full job description and submit your application here:  https://www.localwise.com/job/17025-sales-development-representative/15365-knowi-oakland-ca

Our Open Positions:

Enterprise Account Executives and Sales Development Specialists
We are looking for unproven legends in the making. We are a startup poised for exponential growth this year, and you'll play a crucial role in making that happen.

You'll work directly with our founding team with deep experience in the space. Knowledge and skills you learn here will put you in good stead for years to come.

If you can talk the talk and walk the walk, you'll have unprecedented growth opportunities as the company scales up.

What you need:
  • Lots of ambition, hunger, and a deeply rooted will to succeed.
  • "Whatever it takes" attitude to not only bring in new customers but most importantly, make them successful.
  • Minimum 1-2+ years of sales experience, preferably in a SaaS role for the Sales Development Specialist. 5+ years of experience for Enterprise Account Executives.
  • An upstanding individual with unquestionable honesty, integrity and leadership potential.
  • Quick learner with the intellect and technical aptitude to distil complex solutions, and the ability to communicate them succinctly to technologists and executives.
  • We prize an entrepreneurial spirit with the right make-up to inspire and lead as we grow.
Think you have what it takes? Become that legend. We'd love to hear from you: sales@knowi.com.

Engineering & Design
We are currently hiring Java Engineers, UI and UX Designers and would love to hear from you. Some samples of your code/designs or any of your projects in GitHub would also be helpful. Email us at info@knowi.com.

Support & Customer Engineering
  • Prior experience in the data space.
  • The right aptitude and attitude.
  • Work closely with the founding team and sales.
  • Quickly come up to speed on our product, with the ability to communicate and distil complex technical questions and suggest solutions.
  • Minimum of 4 years of experience in a customer-facing technical role.

Solutions Architects
We are looking for a unique mix of technical prowess, customer-facing skills, and a magnetic personality to lead pre-sales and post-sales engineering.
  • Prior experience in the data space.
  • Proficient in SQL, NoSQL, and various database technologies.
  • Prior experience in Analytics and BI a plus.
  • Work closely with the sales team, product, and engineering.
  • Provide delightful customer experiences and solve their technical problems.
  • At least 5 years of experience in a customer-facing technical role.
If you are interested in these roles, email us at info@knowi.com.

Thursday, April 27, 2017

Knowi on MongoDB Atlas - Tutorial

This is a 10 minute, hands-on tutorial of setting up connectivity to MongoDB Atlas and building visualizations from it using Knowi. The end result is a dashboard of restaurants near the NYC area. In a previous post, we reviewed MongoDB Atlas.



CONNECTING

This assumes that you have an Atlas account set up. (More details on setting up your database on Atlas here.)

  1. Import data into Atlas: use MongoHub or mongoimport to import a restaurants JSON dataset into a collection.

The JSON structure looks like this, with some nested elements:

{
  "borough": "Bronx",
  "cuisine": "Bakery",
  "name": "Morris Park Bake Shop",
  "restaurant_id": "30075445",
  "address": {
    "building": "1007",
    "coord": [
      -73.856077,
      40.848447
    ],
    "street": "Morris Park Ave",
    "zipcode": "10462"
  },
  "grades": [
    {
      "date": {
        "$date": 1393804800000
      },
      "grade": "A",
      "score": 2
    }
  ]
}
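If you prefer a script to mongoimport, a rough pymongo equivalent is sketched below. It assumes a newline-delimited JSON file (the usual mongoimport format); the connection string, database name, and file path are placeholders you would replace with your own Atlas details.

# Sketch: load a newline-delimited JSON file of restaurants into an Atlas collection.
import json
from pymongo import MongoClient

client = MongoClient("mongodb://user:password@<your-atlas-hosts>/test"
                     "?ssl=true&authSource=admin&replicaSet=<yourReplicaSetName>")
collection = client["test"]["restaurants"]

with open("restaurants.json") as f:
    docs = [json.loads(line) for line in f if line.strip()]
collection.insert_many(docs)
print(collection.count_documents({}), "documents imported")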
2. Sign up for a free Knowi account.
3. Create a MongoDB datasource connection:
  • Whitelist our IP addresses in Atlas to enable connectivity. (Alternatively, an on-premise way to set up connectivity is also available; see the agent docs for more details.)
  • Create a new connection in Knowi via the Datasources icon --> New Datasource --> MongoDB.
  • Add the replicaset hosts into the Host(s) section. Example: c9demo-shard-00-00-zito5.mongodb.net:27017,c9demo-shard-00-01-zito5.mongodb.net:27017,c9demo-shard-00-02-zito5.mongodb.net:27017
  • Leave the port empty if you already have ports in the hosts section; specify the database to connect to, along with the user and password.
  • Under the 'Database Properties' field, add the following: ssl=true&authSource=admin&replicaSet=<yourReplicaSetName> (together with the hosts and credentials, this maps onto a standard MongoDB connection string, as sketched below)
  • Use the 'Test Connection' button to ensure that the connection is successful. Save.
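Taken together, the hosts, credentials, and Database Properties above correspond to a standard MongoDB connection string. As a sanity check outside Knowi, you can ping the cluster with pymongo using the same details (user, password, and replica set name below are placeholders):

# Quick connectivity check with pymongo, built from the same details entered in Knowi.
from pymongo import MongoClient

uri = ("mongodb://user:password@"
       "c9demo-shard-00-00-zito5.mongodb.net:27017,"
       "c9demo-shard-00-01-zito5.mongodb.net:27017,"
       "c9demo-shard-00-02-zito5.mongodb.net:27017/test"
       "?ssl=true&authSource=admin&replicaSet=<yourReplicaSetName>")
client = MongoClient(uri)
print(client.admin.command("ping"))   # returns {'ok': 1.0} on success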
Now the fun begins. Let's bring this data to life.



GENERATING QUERIES



After the datasource is saved, click on the Configure Queries link. This will open up the query page. Let's start with a count of restaurants by borough by cuisine.

  • Open up the Query Generator.
  • Collections should be automatically populated with the collections from Atlas. Choose Restaurant collection. This will trigger our field discovery process to determine fields in the collection.
  • Add borough and cuisine to the dimensions dropdown.
  • Add _id into the Metrics section; click on it to add a count aggregation to it.
  • Notice the auto-generated MongoDB query with aggregations (roughly equivalent to the pipeline sketched after this list).
  • Click on Preview to instantly preview the results.
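For reference, the generated query is roughly equivalent to a MongoDB aggregation like the one sketched below (shown via pymongo for illustration; the exact pipeline Knowi produces may differ, and the connection string is a placeholder):

# Roughly the aggregation behind "count of restaurants by borough by cuisine".
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")   # placeholder
pipeline = [
    {"$group": {"_id": {"borough": "$borough", "cuisine": "$cuisine"},
                "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
]
for row in client["test"]["restaurants"].aggregate(pipeline):
    print(row["_id"]["borough"], row["_id"]["cuisine"], row["count"])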

VISUALIZATIONS

Create a dashboard and add our newly created report/widget into it:


Set up a stacked bar chart with Borough on the X-axis and cuisine on the Y-axis.




Set up a few filters (via the filter icon):


In a few simple steps, you have your first visualization from data in Atlas.

DRILLDOWNS

Now, let's take this a step further with a drilldown to produce a cluster map of restaurants  for a given 'borough' & 'cuisine.'





Add another query, this time without aggregations and including the nested address object.


Drill into the address object to drag & drop the 'coord' array into the ad hoc analysis grid, along with the 'borough', 'cuisine' and 'name':
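Behind the scenes, this drilldown query is roughly a filtered find with a projection of the dragged fields, along the lines of the pymongo sketch below (the connection string and the clicked borough/cuisine values are placeholders supplied at drilldown time):

# Roughly the non-aggregated query behind the drilldown: filter to the clicked
# borough/cuisine and project only the fields dragged into the grid.
from pymongo import MongoClient

client = MongoClient("<your-atlas-connection-string>")    # placeholder
cursor = client["test"]["restaurants"].find(
    {"borough": "Bronx", "cuisine": "Bakery"},             # supplied by the drilldown
    {"name": 1, "borough": 1, "cuisine": 1, "address.coord": 1, "_id": 0},
)
for doc in cursor:
    print(doc["name"], doc["address"]["coord"])            # coord = [longitude, latitude]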

 


Set it up as a Geo Marker Cluster Visualization. Note that the coord nested array is automatically unwound as lat/long coordinates on the map.




On Visualization settings, set the Name to the restaurant name, and center the map on NYC (Lat/Long of 40, -74). Save the new query/widget.


Set up a drilldown from our parent bar chart into the map.






Click on any point on the bar chart to get a cluster map of restaurants for the clicked combination 'borough' & 'cuisine':



SUMMARY
To summarize, we imported data into an Atlas cluster, connected to it using Knowi, generated a few native MongoDB queries on it and set up a few visualizations from it.

Now it's your turn to bring your own data to life!


Wednesday, April 19, 2017

We've Got Some Exciting News: New Machine Learning Capabilities and a New Name!


We have a couple of exciting updates.

First, our latest release brings all of the power of our Business Intelligence platform and integrates Artificial Intelligence. With this release, you can blend hindsight and foresight to drive actions from your data. Not only that, it also enables you to offer value added machine learning capabilities to your customers, for embedded use cases.

Second, we are rebranding Cloud9 Charts to Knowi. With the latest product updates, it was the right time to pull the trigger on the name update that we’ve been simmering on for some time.


In a short period, we’ve built a leadership position in NoSQL/Polyglot analytics, with customers ranging from Fortune 500 companies to startups. Now we are aiming even higher.  With Knowi at the top of your modern data stack, you can blend SQL and NoSQL data and integrate machine learning to deliver smarter data applications.

If you’d like us to go over the Machine Learning capabilities, please contact us. It’s included as part of your service during the beta period for the next four months.

Thank you from all of us for being part of our journey thus far...we look forward to many more years to come.

Onwards & Upwards!

Jay Gopalakrishnan
Founder and CEO, Knowi

Sunday, April 16, 2017

4 Reasons Why Your BI Tool Is Preventing You from Becoming Data-Driven

Becoming data-driven company-wide, where data is integrated into the decision-making process at all levels of the organization because employees, partners, and customers can access the right data at the right time, is often the ultimate goal when implementing an analytics solution.

There are dozens of BI tools, from Tableau to Qlik to more modern tools like Looker, that promise to help achieve this goal.  However, the path is often overwhelmingly difficult because of hidden barriers to implementation and adoption that only reveal themselves as you try to implement.  What many don’t realize is that the most common issues are not on the front-end, where end users enjoy high-quality experiences with most traditional BI tools, but rather on the back-end.  There, data engineers and IT developers are knitting together complex data platforms to move and manipulate unstructured data back into structured tables and get data prepped just so these BI tools will work.

In recent years, we’ve seen a fast and furious evolution of data particularly in the emergence of Big Data and IoT technologies.  Not so long ago, most data used for analytics lived inside your four walls in well-understood systems and was always structured.  Data warehouses provided a single source for analytics and business analysts happily built retrospective visualizations and reports using day old data. 

Then came Big Data with a whole new world of real-time insights into customer sentiment or behavior tracking.  Now, looking at yesterday’s data was so, well, yesterday.  The new face of business intelligence was real-time dashboards built using data from mostly external sources and housed in multiple data stores.  As the data evolution progresses further into IoT, data only gets bigger and faster and is almost always unstructured or multi-structured.

However, virtually every BI tool is stuck in the age of structured data.  Modern data analytics blends structured, unstructured, and multi-structured data together to glean insights.  If your organization cannot leverage NoSQL data, then how do you integrate that data into your decision-making process?  These traditional SQL-friendly BI tools tell you they support NoSQL, but there is a catch, and that catch permeates everything else, putting your data-driven dreams at risk. Here are four reasons why:

They are not NoSQL-friendly

No data lives in a silo, and one of the significant barriers to fully leveraging your data with traditional BI tools like Tableau, Qlik, or Looker is that they only understand structured data.  To use NoSQL data with any SQL-based BI tool, you are:
  1. Writing and maintaining custom extract queries using proprietary query languages provided by the NoSQL database that someone must learn
  2. Installing proprietary ODBC drivers for each NoSQL database
  3. Using batch ETL processes to move unstructured NoSQL data into relational tables
  4. Doing analytics on unstructured data using different data discovery tools and then doing ETL
  5. Dumping everything into Hadoop or a data lake to clean and prep it, and eventually moving it to a traditional data warehouse
  6. Some mash-up of the above
In all cases, schemas must be defined, extract queries must be written, and unstructured data must be shoehorned back into relational tables.  Congrats, you’ve just taken your beautiful modern NoSQL data and made it old again!

All kidding aside, this is the main barrier to becoming data-driven using your traditional BI tool.  If you cannot natively integrate your analytics platform to your modern data stack, then you simply cannot fully leverage your enterprise data.

Example: MongoDB BI Connector moves data from MongoDB to MySQL.

They are slow to adapt and lack intelligent actions

The bottom line is that the way people interact with analytics in data-driven enterprises is not “one-size-fits-all.”  You have a diverse set of users with differing experience expectations and use-case complexity levels, all of which must be managed to achieve company-wide adoption.

For example, not surprisingly, many people hate looking at data.  They don’t wake up every morning excited about looking at dashboards and drilling down to try to figure out why an application, a market, a region, or a product is not performing like it did yesterday.  They don’t think trying to find the needle in the haystack is a good use of their time.  Instead, they want their analytics platform to tell them where the problem is, why it happened, whether it is likely to happen again, and, in some cases, automatically trigger what to do next.  At the same time, not everyone is looking for a needle; some are just looking for the haystack.  For this group, a shareable or embeddable dashboard works great.

Then you have the citizen business analyst who just wants to ask a question using Slack or even Siri and have the right dashboard or report appear. 

Traditional BI tools were built with the purpose of handling the “haystack” scenario but are falling behind in helping people find the “needle.”  The “needle” challenge requires more advanced analytics capabilities.  In many cases, this means predictive analytics with integrated machine learning.  Additionally, natural language processing (NLP) interfaces are emerging as alternatives to embedded dashboards to help empower non-technical users with simple analytics needs like “show me the sales for today at store 123”.

Many traditional BI tools lack the ability to adapt to these emerging modern analytics requirements.  Coupled with their lack of native integration with modern data stacks, this means other solutions are acquired to manage these different use cases and user experience requirements.  As a result, instead of one analytics solution, most organizations have multiple solutions which operate in data silos to solve very specific use cases using a subset of the enterprise’s data.  The proliferation of analytics solutions is hardly a recipe for becoming a data-driven enterprise.

Not as data democratic as you think

By having to move and manipulate your unstructured data back into relational tables, an artificial wall is built between business users and all available enterprise data.  Expensive IT developers must be involved in virtually every project to integrate NoSQL data.  This adds months to projects, makes changes very expensive, and increases the overall cost of data. Arguably, the real cost is the loss of understanding of the value of newer NoSQL data sources.  The business ends up so far away from the original data that it is hard for them to know what questions are possible to ask and therefore to realize the value of newer data.  As a result, the questions instead become centered around the cost of acquiring and storing NoSQL data, and that is where many Big Data initiatives start to fail.

Not Big Data scalable

As mentioned earlier, data is only getting bigger and faster as IoT analytics hit the mainstream.  When ETL or ODBC drivers are used to move and manipulate NoSQL data to relational tables, then data limits are also added to prevent these processes from failing at volume.  Typically, record retrieval limits or aggregated data sets are employed to combat these performance issues.  A separate data discovery process is used to determine what data to retrieve or what aggregations to create.  Data Discovery, in this case, requires a different tool or custom coding and is almost always done in IT with input from the business.  From a process, technology and people perspective that is just not scalable or sustainable when it comes to leveraging Big Data to become data-driven.  There are simply too many restrictions on data and moving parts for the performance to meet business needs.

These traditional BI tools claim to reduce data silos, reduce time to insights and enable data discovery across the enterprise.  However, our customers repeatedly tell us they fail at achieving this when it comes to integrating NoSQL data into their business analytics. 

To be data-driven requires a modern analytics platform that enables data discovery across your modern data stack, with support for descriptive, predictive, and prescriptive analytics to derive insights and drive actions in a way that supports the specific needs of a diverse end-user community and their use cases.
________

Cloud9 Charts is a modern analytics platform that has already enabled dozens of organizations, large and small, to become data-driven.  We natively integrate NoSQL and SQL data sources and enable multi-datasource joins for seamless blending of structured, unstructured, and multi-structured data without the need to move it.  You can share or embed over 30 different visualizations or use our advanced analytics capabilities to automate alerts and actions.


In the coming days, we have an exciting release announcement that will bring you one big step closer to achieving your goal of becoming a data-driven enterprise using a single analytics platform.  Stay tuned…