Data Mart

Cloud Data Warehousing Alternatives for ParAccel and Actian Matrix

Actian’s retreat from data warehousing forces ParAccel/Matrix owners to find a viable, supported platform for their workloads, but without the usual time or budget set aside for such a project: the announcement came as a surprise, and the timeline is short (support ends April 30).

Cazena recently hosted a webinar for IT teams facing this sticky wicket, featuring Paul Wolmering, Lead Solution Architect at Cazena, and Lokesh Khosla, Principal at Magnus Data. Together they have over 30 years of experience with MPP architectures, 12 of them with ParAccel. They covered:

  • What should ParAccel/Matrix organizations consider when evaluating their options?
  • What are the pros and cons of each option: staying on an unsupported platform, replacing it with another data warehouse platform, switching to Hadoop, or migrating workloads to the cloud?
  • How do cloud and on-premises options stack up?
  • What are the benefits and challenges of moving data warehousing workloads to the cloud?

Lokesh reviewed case studies, such as an advanced marketing analytics company that reduced costs by 80% by moving from Netezza to AWS and Hadoop. Then Paul described how Cazena’s Big Data as a Service simplifies the migration of data warehousing and big data processing to the cloud.

So while end-of-life for ParAccel/Matrix creates a short-term headache for many, cloud-based data warehousing fortunately provides a cost-effective alternative that is agile enough to meet urgent timelines and delivers equal or better performance.

If you missed it “live,” you can watch the webinar recording here. If offloading workloads from end-of-life data warehousing platforms to the cloud sounds interesting to you, contact us for a free assessment to learn how best to get started.

End of the Line for Traditional Data Warehousing?

Big data, cloud and open-source technologies are revolutionizing data warehousing and traditional vendors are scrambling to adapt. Witness Actian ending support for Matrix (formerly ParAccel), Pivotal taking Greenplum database open source, and HPE spinning off Vertica and other assets. So it’s no surprise that data and analytics leaders want to explore their options when their old data warehouse platform nears the end of its life due to capacity limits, the end of vendor support, or data center consolidation.

In the face of this rapid evolution, many enterprises question the wisdom of locking in to yesteryear’s paradigm for another generation by making additional big investments in on-premises technologies like Oracle Exadata, Teradata, IBM Netezza, EMC DCA (Greenplum Database), Actian Matrix or Vertica. Migrating data warehouses to the cloud promises a more agile, cost-effective and scalable option.

However, data warehousing in the cloud is not just a matter of spinning up an Azure SQL Data Warehouse or Redshift cluster. That’s the easy part! Just as a car is more than an engine, an enterprise data warehouse is a complex system. Unless they are building everything in the cloud from scratch, enterprises must figure out how to address security, data movement, and integration with BI/analytics tools, data sources, and related systems like ETL and MDM in the new context of the cloud. Many are also weighing the potential role of Hadoop and Spark, and whether those could cut costs and add capabilities. Suddenly the “simple” question of migration becomes a more complicated and strategic issue.
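To show just how “easy” the easy part is, here is a minimal sketch, assuming the AWS SDK for Python (boto3) and placeholder names, credentials and sizing (nothing Cazena-specific), of provisioning a small Redshift cluster:

    # Minimal sketch: provisioning a small Redshift cluster with boto3.
    # The identifier, credentials and node sizing are hypothetical placeholders.
    import boto3

    redshift = boto3.client("redshift", region_name="us-east-1")

    redshift.create_cluster(
        ClusterIdentifier="dw-migration-poc",   # hypothetical cluster name
        ClusterType="multi-node",
        NodeType="dc2.large",
        NumberOfNodes=2,
        DBName="analytics",
        MasterUsername="admin",
        MasterUserPassword="ChangeMe123!",      # use a secrets store in practice
    )

    # Block until the cluster is available, then print its SQL endpoint.
    redshift.get_waiter("cluster_available").wait(ClusterIdentifier="dw-migration-poc")
    cluster = redshift.describe_clusters(ClusterIdentifier="dw-migration-poc")["Clusters"][0]
    print(cluster["Endpoint"]["Address"], cluster["Endpoint"]["Port"])

A few minutes later you have a running cluster, but none of the security, data movement or integration work that turns it into a usable enterprise data warehouse.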

We built Cazena’s Big Data as a Service to seamlessly migrate data warehouses to Azure and AWS. Our service incorporates a variety of engines including MPP SQL, Hadoop, Spark and other technologies – and we’ve focused on ease of use, integration and how to stay connected with your existing data architecture.

Cazena’s platform has many built-in capabilities, including on-premises-to-cloud data movement, security and compliance functions, and intelligent provisioning and operations, so production data warehouses can be migrated to the cloud quickly. Cazena also has the benchmark data and expertise to map your workloads to the best combination of data and cloud technologies to maximize price/performance. For example:

  • Moving classic BI/MPP SQL style workloads to our Data Mart as a Service, powered by Greenplum Database on Azure or AWS Redshift
  • Migrating ETL workloads from your data warehouse to our Data Lake as a Service leveraging Hadoop/Spark (see the sketch after this list)
  • Augmenting data warehouses with our Data Lake as a Service to support self-serve data science users utilizing R, Python, Scala, etc.
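
To make the ETL bullet concrete, here is a minimal PySpark sketch of the kind of transformation job that might be offloaded from a warehouse to a Hadoop/Spark data lake; the paths, columns and “orders” dataset are hypothetical illustrations, not part of Cazena’s service:

    # Minimal sketch: an ETL step offloaded from a data warehouse to Spark.
    # Paths, column names and the "orders" dataset are placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("dw-etl-offload").getOrCreate()

    # Extract: read raw exports staged in cloud object storage.
    orders = spark.read.option("header", True).csv("s3a://staging/orders/*.csv")

    # Transform: the cleansing and aggregation that used to run as SQL in the warehouse.
    daily_revenue = (
        orders
        .withColumn("amount", F.col("amount").cast("double"))
        .filter(F.col("status") == "shipped")
        .groupBy("order_date", "region")
        .agg(F.sum("amount").alias("revenue"))
    )

    # Load: write curated results as Parquet, partitioned for downstream BI tools.
    daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated/daily_revenue/")

Running a job like this on a data lake cluster frees the MPP SQL engine to focus on BI queries, which is the kind of workload mapping described above.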

If you’d like to learn more about migrating data warehousing to the cloud, contact us for a demo or free Cazena test drive.

New Infographic: Decades of Database Innovation (and My First Database)

You always remember your first…database. You didn’t quite know what you were doing and it was a bit awkward, but you figured it out and eventually ran your first query. Whether you built it, maintained it, used it or cursed it, I'm guessing that you have at least one memorable database in your past. Can you plot it on our new infographic below and share it?

We worked with industry analyst Robin Bloor to highlight decades of data technology milestones. This timeline visually shows why it’s a challenge for enterprises to choose which database technologies to adopt and when. With nine- to 12-month implementation cycles, a wrong bet can be a costly mistake, wasting time you won’t get back.

That’s all changing with Big Data as a Service and the cloud. For example, Cazena uses multiple database engines to power our Data Lake as a Service and Data Mart as a Service solutions. Our secure cloud service has built-in data movers, making it easy to shift analytic workloads between different engines. We regularly benchmark and add new engines to the platform, ensuring our customers always get the benefits of the latest database technologies: MPP SQL, Hadoop, Spark and whatever becomes The Next Big Thing.

While the journalist in me cringes at the buzzword I’m about to drop, it’s apt here: Big Data as a Service can “future-proof” enterprises. It’s a new way of looking at data processing, and it’s a major evolution from early database technologies, as you can see on the graphic.

My first was a Microsoft Access database, which I built when my customer spreadsheet at a 1990s startup became unwieldy. It evolved as the company grew – and as I learned more. I got advice from my dad, who worked on databases for a large insurance company. Along the way, he explained mainframes, the new columnar databases the insurer was testing and why he had to build “indexes” so that queries ran faster. It was fascinating (a testament to my dad’s teaching skills), challenging, and clearly important. Eventually, my Access database had to be replaced, and I learned about a whole new set of databases. Not long after that startup, I became a tech journalist, covering data and analytics, and I’ve worked in this industry ever since. Cue violins.

That was #myfirstdatabase – what was yours?

[Infographic: Decades of Database Innovation (History_infographic_sm.jpg)]

Learn more about Big Data as a Service >>