It might feel like DevOps is eating the world, but there’s still room for other innovations within and adjacent to IT operations. One such example is the DataOps movement. The inspiration behind DataOps is similar to DevOps in that it strives to deliver higher-quality results in shorter cycles by pairing technology with specific methodologies built around it.
DataOps does not boil down to DevOps principles applied to data analytics, however. While both approaches may embrace automation, continuous improvement, and strong communication between departments, DataOps is less of an infinite cycle and more of an injection of agility into a one-way data pipeline.
Let’s explore the roles, strategies, and technologies at play in a DataOps approach to analytics.
A fundamental building block for successful adoption of cloud services is the organizational hierarchy: a way of organizing your cloud services, resources, and virtual machines that enforces cloud governance and makes billing easier to attribute within your organization.
Cloud governance is the answer to common questions like:
• “How do I keep my data compliant with industry regulations?”
• “How can I implement chargeback within my organization so I know which departments are consuming cloud services and account for that usage?”
• “How can I mandate security and access measures across our cloud environment?”
By implementing a flexible set of controls and overall organizational hierarchy within Azure, you can enable adoption of the cloud services your business units require and avoid shadow cloud use. A well-designed enterprise cloud environment can accommodate modern agile practices alongside traditional workloads.
Here’s how to structure your organizational hierarchy within Azure so you can set governance requirements and encourage speed of delivery for your individual departments and business units.
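To make the chargeback idea concrete, here is a minimal Python sketch of rolling up tagged resource costs by department. The resource records, the `department` tag name, and the cost figures are all invented for illustration; real chargeback would pull this data from your cloud provider's cost and tagging APIs.

```python
# Hypothetical sketch: aggregating tagged cloud spend for chargeback.
# Resource records, tag names, and costs are illustrative, not Azure output.
from collections import defaultdict

resources = [
    {"resource_group": "rg-finance-prod", "tags": {"department": "Finance"}, "monthly_cost": 1200.0},
    {"resource_group": "rg-finance-prod", "tags": {"department": "Finance"}, "monthly_cost": 300.0},
    {"resource_group": "rg-mkt-dev", "tags": {"department": "Marketing"}, "monthly_cost": 450.0},
    {"resource_group": "rg-shared", "tags": {}, "monthly_cost": 200.0},
]

def chargeback_by_department(resources):
    """Sum monthly cost per department tag; untagged spend is surfaced, not hidden."""
    totals = defaultdict(float)
    for r in resources:
        dept = r["tags"].get("department", "UNTAGGED")
        totals[dept] += r["monthly_cost"]
    return dict(totals)

print(chargeback_by_department(resources))
```

Surfacing an explicit `UNTAGGED` bucket is deliberate: a governance policy that mandates tags is only enforceable if untagged spend is visible.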
There are two main categories of application security testing: dynamic and static. They can be thought of as testing from the outside-in and from the inside-out, respectively.
Dynamic testing is performed as an application is running and focuses on simulating how an outside attacker might access that application and associated systems. Static testing, on the other hand, examines the code itself and related documentation, often throughout the actual development process, to try and discover potential vulnerabilities before the application reaches production.
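To illustrate the static side, here is a tiny Python sketch of the kind of flaw a static analyzer is designed to catch in source code before production: user input concatenated into a SQL string, next to the parameterized fix. The table and function names are invented for the example.

```python
# Hypothetical example of a flaw static analysis would flag, plus its fix.
import sqlite3

def find_user_unsafe(conn, username):
    # SAST-flaggable: user input concatenated into SQL (injection risk).
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # The fix: a parameterized query; the driver handles escaping.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# An attacker-controlled value that subverts the unsafe version:
payload = "nobody' OR '1'='1"
print(find_user_unsafe(conn, payload))  # returns every row
print(find_user_safe(conn, payload))    # returns no rows
```

A dynamic test would find the same flaw only by firing that payload at the running application; static analysis spots the dangerous string concatenation in the code itself.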
Should you use DAST or SAST for your applications? In truth, it isn’t an either/or decision: dynamic application security testing (DAST) and static application security testing (SAST) are complementary approaches that evolved individually. First, let’s take a look at the key differences between them.
Last week I introduced key agile concepts including the history of and essential roles required for Scrum practices. I described a real-world example of how DevOps could have saved my organization major headaches and expenses.
In Part Two of this post on using agile Scrum methodology within DevOps ecosystems, we'll examine the Four Values of Agile and learn how to change your organizational mindset to accommodate this new paradigm.
In early 2001, I was involved in a software development project integrating a bolt-on application with a JD Edwards ERP platform. The team completed the initial requirements collection and developed a comprehensive Business Requirements Document (BRD), an effort that took roughly two to three months. The team held multiple review sessions to identify gaps in the requirements and, after a few cycles, received approval to proceed to the development phase.
While development was in progress, some of the EDI-based vendor data sources changed their mappings, throwing the project into chaos. The management team decided to put the development phase on hold, and the project had to go through the requirements cycle again to close the gaps. The delay hit both the schedule and the budget, creating massive frustration across the organization: the first six months of project investment were now stalled. The sequential software development process we followed offered no flexibility to deliver incremental value to the organization along the way.
If we had followed an agile approach, this challenging situation might have played out differently. Short development iterations would have delivered incremental value to the organization, minimizing the concern of going six long months with nothing to show.
Here's how agile practices, Scrum, and DevOps all work together. Learn how to overcome adoption obstacles and discover several keys to Scrum success in this two-part blog series.
Serverless functions (often referred to as Functions as a Service, or FaaS) will no doubt continue to grow in popularity and remain a cornerstone of IT services for many years to come. However, they are simply another way of building, maintaining, and delivering IT systems. With that in mind, they naturally have disadvantages, or situations in which they may not be the preferred technology to use. These stem both from the nature of serverless computing itself and from how cloud service providers currently implement it.
Microsoft Azure offers native serverless computing features. Two of the most crucial to master are Azure Functions and Azure Logic Apps. Each helps enable business logic that automates your Azure workflows, but they have key differences; in fact, they can be used together in a complementary manner to offer flexible, powerful control over your cloud resources.
Let’s take a closer look at how each of these serverless automation platforms work within Azure and some use cases for them.
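To ground the Functions side, here is a dependency-free Python sketch of the stateless request-in/result-out shape a serverless function follows. This is not actual Azure Functions SDK code: the real service wraps handlers like this in its `azure.functions` bindings and triggers, and the event fields below are invented for illustration.

```python
# Hypothetical FaaS-style handler: stateless, one event in, one result out.
# Event shape and field names are invented for this sketch.
import json

def handle_order(event):
    """Validate an order event and return an HTTP-like response dict.

    No state is shared between invocations, mirroring how a serverless
    platform invokes a fresh handler per event.
    """
    order = json.loads(event["body"])
    if "sku" not in order or order.get("quantity", 0) <= 0:
        return {"status": 400, "body": json.dumps({"error": "invalid order"})}
    # Business logic would go here (e.g., enqueue the order for fulfillment).
    return {"status": 200, "body": json.dumps({"accepted": order["sku"]})}

# Example invocation, as the platform would perform once per incoming event:
print(handle_order({"body": json.dumps({"sku": "AB-123", "quantity": 2})}))
```

A Logic App, by contrast, would express the same validate-then-route decision declaratively as workflow steps rather than as code you write and deploy.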
Hybrid cloud management extends beyond setting up your IaaS environment. The majority of enterprises use a mix of on-premises infrastructure (both legacy and newly deployed) and cloud-based resources. Often a major hurdle remains: applications that are not ready to connect to the cloud.
Enter Integration as a Service. We know, we know. Everything-as-a-Service overload! This emerging field involves a vendor who can help architect enterprise IT apps to work across on-premises and cloud environments, complete with real-time exchange of data.
How does Integration as a Service work, and what should you expect from a cloud integration provider?
If you’ve newly set foot on the path of an InfoSec student, you will benefit from understanding this topic. If you’ve been around a while, you’ve lived it.
There are two basic types of Information Security engagements in terms of how they are scoped. This is most applicable to managed services providers (MSPs), though it remains relevant to a practitioner supporting an internal corporate or public-sector security team. For the sake of simplicity, I’m going to call them FFP and T&M. The purpose of this blog isn’t to dig deep into financial models, but rather to discuss, in a simplified manner, how they drive the delivery of work, and then to look at an alternative model.
With both Firm Fixed Price and Time & Materials engagements – and really any other model of InfoSec contract scope – there are some overlapping goals and realities.
DevOps — the marriage of the development and operations departments within a software organization — and Agile methodology have been mentioned alongside cloud computing for years now, and with good reason. Agile and the cloud are a classic pairing that goes together like peanut butter and jelly or macaroni and cheese…okay, let me go grab a snack before this simile gets me drooling.
But seriously, even if Agile and cloud technology aren’t as tasty as PB&J, they can still have you smacking your lips in satisfaction as you react to business problems with technology solutions in a much faster and more reliable manner.
Here’s why Agile software development practices work so well when you’re working with cloud infrastructure, even if you aren’t a software development company.