Webinar Replays


  • Apr 13 2017

    Cloud migration & hybrid cloud with no downtime and no disruption

If business-critical applications with continually changing data are really moving to the cloud, the typical lift-and-shift approach of copying your data onto an appliance and shipping it to the cloud vendor to load onto their storage days later isn't going to work. Nor will one-way batch replication solutions that can't maintain consistency between on-premises and cloud storage. Join us as we discuss how to migrate to the cloud without production downtime and, post-migration, deploy a true hybrid cloud, elastic data center solution that turns the cloud into a real-time extension of your on-premises environment. These capabilities enable a host of use cases, including using the cloud for offsite disaster recovery with no downtime and no data loss.
  • Apr 11 2017

    Continuous Replication and Migration for Network File Systems

Fusion® 2.10, the new major release from WANdisco, adds support for seamless data replication at petabyte scale from network file systems on NetApp devices to any mix of on-premises and cloud environments. NetApp devices can continue processing normal operations while WANdisco Fusion® replicates data in phases with guaranteed consistency and no disruption to target environments, including those of cloud storage providers. This new capability supports hybrid cloud use cases such as on-demand burst-out processing for data analytics and offsite disaster recovery with no downtime and no data loss.


  • Mar 30 2017

    Building a truly hybrid cloud with Google Cloud

Join James Malone, Google Cloud Dataproc Product Manager, and Paul Scott-Murphy, WANdisco VP of Product Management, as they explain how to address the challenges of operating hybrid environments that span Google Cloud and on-premises services, showing how active data replication with guaranteed consistency can work at scale. Register now to learn how to provide local speed of access to data across all environments, allowing hybrid solutions to leverage the power of Google Cloud.


  • Feb 14 2017

    ETL and big data: Building simpler data pipelines

In the traditional world of EDW, ETL pipelines are a troublesome bottleneck when preparing data for use in the data warehouse. ETL pipelines are notoriously expensive and brittle, so as companies move to Hadoop, they look forward to getting rid of their ETL infrastructure. But is it that simple? Some companies are finding that, in order to move data between clusters for backup or aggregation, whether on-premises or to the cloud, they are building systems that look an awful lot like ETL.