It’s been over a month since I attended the Gartner IT Symposium/Xpo in Orlando and I’ve spent that time really chewing on some of the great sessions and thought leadership presented at the show. Modern IT practices remain a moving target, so plugging into the analyst machine every once in a while helps me get a bigger picture beyond even our day-to-day at Green House Data (which can be pretty diverse itself, with big pushes on DevOps and digital transformation while we balance our existing data center, cloud, and managed services pillars).
It was interesting hearing Gartner start to shift their message from “cloud is the only option” to “cloud is an option.” As cloud adoption strategies have matured we have seen this attitude shift as well, with more organizations adopting multi-cloud strategies while maintaining some on-prem systems. One presentation comparing public cloud costs to on-prem data centers really helped drive this home. The bottom line is that the cloud is not automatically cheaper, or even necessarily more efficient, depending on the application or purpose of the deployment.
Other major topics included how to find digital talent, as the management of human capital and IT teams continues to evolve alongside the industry, as well as one of my favorite presentations, “Are You Maximizing Your Security Operations Center,” which had a ton of great information around security.
With the symposium still fresh in mind, here is my list of where enterprise IT operations are heading in 2020 and beyond.
Green House Data was onsite last week at Microsoft Ignite. We had some incredible conversations at our booth about Azure, PowerApps, application modernization, DevOps, Windows Server end of support, and more. Of course, while we were working the floor, Microsoft made a bevy of product announcements around core products and services that are sure to shake up your IT world! I’m super excited about these new developments, so here are my top takeaways from the show.
It happens to everyone at some point. Your budget gets slashed; the economy tanks; you’re suddenly in the red thanks to cloud sprawl. Whatever the cause, you’ll likely face a mandatory cost cutting initiative at some point in your IT career.
While cost cutting is a reality, it is fundamentally different from ongoing cost optimization. You should be practicing cost optimization as part of your regular duties, reviewing spend and ensuring the technology, hardware, software, and services in use across your organization are serving their business need and appropriately configured in scope and performance.
By formulating and practicing a cost optimization protocol, you’ll be prepared should the day for cost cutting ever come, while also gathering evidence for the impact IT has on the overall bottom line.
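As a flavor of what a recurring cost optimization review might look like in practice, here is a minimal sketch that flags resources whose utilization suggests they are oversized. The resource names, costs, and the 20% threshold are illustrative assumptions, not data from any real environment.

```python
# Hypothetical sketch: flag resources whose average utilization suggests
# they are oversized, as part of a recurring cost optimization review.
# Names, costs, and the 20% threshold are illustrative assumptions.

def flag_rightsizing_candidates(resources, utilization_threshold=0.20):
    """Return resources whose average CPU utilization falls below the
    threshold, sorted by monthly cost so the biggest potential savings
    surface first."""
    candidates = [r for r in resources if r["avg_cpu"] < utilization_threshold]
    return sorted(candidates, key=lambda r: r["monthly_cost"], reverse=True)

inventory = [
    {"name": "app-vm-01", "avg_cpu": 0.55, "monthly_cost": 420.0},
    {"name": "batch-vm-02", "avg_cpu": 0.08, "monthly_cost": 910.0},
    {"name": "dev-vm-03", "avg_cpu": 0.12, "monthly_cost": 130.0},
]

for r in flag_rightsizing_candidates(inventory):
    print(f"{r['name']}: {r['avg_cpu']:.0%} avg CPU, ${r['monthly_cost']:.2f}/mo")
```

Run regularly, even a simple report like this doubles as the evidence trail mentioned above: when a cost cutting mandate arrives, you already know which resources are candidates and which are pulling their weight.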
If your organization is large enough to have an information security manager or an entire security team, then it’s likely that any security issue or task will be pushed in their direction. That’s why you hired them, isn’t it?
Security is a specialized area of IT and it requires specific skills for a holistic approach. It is also a moving target with many components and attack vectors across your technology stack. A dedicated security team or individual, whether in-house or contracted, can therefore be valuable. But security must be a shared responsibility among every user, no matter their role.
There’s an inherent problem here and its name is Diffusion of Responsibility. When everyone has a stake in security and there are dedicated managers to boot, users could be more likely to engage in risky behavior. After all, it’s taken care of! That’s why we hired that security guy.
It might feel like DevOps is eating the world, but there’s still room for other innovations within and adjacent to IT operations. One such example is the DataOps movement. The general inspiration behind DataOps is similar to DevOps in that it strives to provide higher quality deliverables from shorter cycles by leveraging technology and specific methodologies around it.
DataOps does not boil down to DevOps principles applied to data analytics, however. While both approaches may embrace automation, continuous improvement, and strong communication between departments, DataOps is less of an infinite cycle and more of an injection of agility into a one-way data pipeline.
Let’s explore the roles, strategies, and technologies at play in a DataOps approach to analytics.
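To make the “one-way pipeline with injected agility” idea concrete, here is a minimal sketch of a data pipeline with a DataOps-style quality gate between stages. The stage names, record shape, and validation rule are assumptions for illustration, not a specific DataOps toolchain.

```python
# Illustrative sketch of a one-way data pipeline with a DataOps-style
# quality gate between stages. Stage names and the validation rule are
# assumptions, not a specific DataOps product or framework.

def ingest(raw_rows):
    """Ingest stage: accept raw records from a source system."""
    return list(raw_rows)

def validate(rows):
    """Quality gate: drop records that fail basic checks before they
    propagate downstream to analytics consumers."""
    return [r for r in rows if r.get("amount") is not None and r["amount"] >= 0]

def transform(rows):
    """Transform stage: normalize records for the analytics layer."""
    return [{"region": r["region"].upper(), "amount": r["amount"]} for r in rows]

raw = [
    {"region": "west", "amount": 19.5},
    {"region": "east", "amount": None},   # fails the quality gate
    {"region": "south", "amount": -5.0},  # fails the quality gate
]

clean = transform(validate(ingest(raw)))
print(clean)  # [{'region': 'WEST', 'amount': 19.5}]
```

The key DataOps move is that the quality gate runs automatically on every cycle, so bad records are caught at the point of ingestion rather than discovered later in a dashboard.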
A fundamental building block for successful adoption of cloud services is the organizational hierarchy: a way of organizing your cloud services, resources, and virtual machines so that you can enforce cloud governance and more easily resolve billing within your organization.
Cloud governance is the answer to common questions like:
• “How do I keep my data compliant with industry regulations?”
• “How can I implement chargeback within my organization so I know which departments are consuming cloud services and account for that usage?”
• “How can I mandate security and access measures across our cloud environment?”
By implementing a flexible set of controls and overall organizational hierarchy within Azure, you can enable adoption of the cloud services your business units require and avoid shadow cloud use. A well-designed enterprise cloud environment can accommodate modern agile practices alongside traditional workloads.
Here’s how to structure your organizational hierarchy within Azure so you can set governance requirements and encourage speed of delivery for your individual departments and business units.
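As one small piece of the chargeback question above, here is a sketch of rolling up resource costs by a "department" tag, the way a consistently tagged Azure hierarchy lets you attribute spend to business units. The resource names, costs, and tag key are invented for illustration.

```python
# Hypothetical sketch of chargeback: roll up monthly resource costs by a
# "department" tag so each business unit can account for its cloud usage.
# Resource names, costs, and the tag key are illustrative assumptions.
from collections import defaultdict

def chargeback_by_department(resources):
    totals = defaultdict(float)
    for r in resources:
        # Untagged spend lands in a catch-all bucket so it stays visible
        # instead of silently disappearing from the report.
        dept = r.get("tags", {}).get("department", "untagged")
        totals[dept] += r["monthly_cost"]
    return dict(totals)

resources = [
    {"name": "sql-prod", "monthly_cost": 650.0, "tags": {"department": "finance"}},
    {"name": "web-prod", "monthly_cost": 300.0, "tags": {"department": "marketing"}},
    {"name": "legacy-vm", "monthly_cost": 75.0, "tags": {}},
]

print(chargeback_by_department(resources))
# {'finance': 650.0, 'marketing': 300.0, 'untagged': 75.0}
```

The "untagged" bucket is a deliberate design choice: a growing catch-all is usually the first sign that your tagging policy, and by extension your governance hierarchy, needs enforcement.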
Last week I introduced key agile concepts including the history of and essential roles required for Scrum practices. I described a real-world example of how DevOps could have saved my organization major headaches and expenses.
In Part Two of this post on using agile Scrum methodology within DevOps ecosystems, we'll examine the Four Values of Agile and learn how to change your organizational mindset to accommodate this new paradigm.
In early 2001 I was involved in a software development project integrating a bolt-on application with a JD Edwards ERP platform. The team completed the initial requirements collection and developed a comprehensive Business Requirement Document (BRD), investing roughly two to three months. The team held multiple review sessions to identify gaps in the requirements process and, after a few cycles, received approval to proceed to the development phase.
While development was in progress, some of the EDI-based vendor data sources changed their mappings. This created chaos in the project. The management team put the development phase on hold, and the project had to go through the requirements cycle again to identify the gaps. The delay impacted both schedule and budget, creating massive frustration in the organization: the first six months of project investment were effectively frozen. The sequential software development process we followed had not given us the flexibility to deliver any incremental value since the project began.
If we had followed an agile approach, this challenging situation might have played out differently. Short intervals of development would have produced incremental value for the organization, minimizing the concern of going six long months without delivering anything.
Here's how agile practices, Scrum, and DevOps all work together. Learn how to overcome adoption obstacles and several keys to Scrum success in this two-part blog series.
In InfoSec we continually encounter the unknown, the unfamiliar. Technology marches ever forward, application design matures, bells and whistles chime and toot. This commonly results in the InfoSec professional needing to responsibly secure technology that they don’t holistically understand. Attackers know this, for it is within those gaps in understanding that malicious activity may most readily occur and may do so without notice.
A common InfoSec response to the unfamiliar is to attempt to cover all potential angles of attack, regardless of whether they are pertinent to the technology. This is done in order to ensure that we meet both risk management and governance goals. The result of this approach is rarely better security. Rather, it typically results in unnecessarily complicated security control implementations that are neither functional (i.e., they don’t do what we want/expect them to do) nor operational (i.e., our personnel can’t adequately manage them).
How do we avoid over-complication in our security controls? We focus on the fundamentals: Preparation, Awareness, Response.
Microsoft Azure offers native serverless computing features. Two of the most crucial to master are Azure Functions and Azure Logic Apps. Each helps enable business logic that automates your Azure workflow, but they have key differences and in fact can be used together in a complementary manner to offer flexible, powerful control over your cloud resources.
Let’s take a closer look at how each of these serverless automation platforms work within Azure and some use cases for them.
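To give a feel for the division of labor, here is a plain-Python stand-in for the kind of small, single-purpose business logic you would typically host in an Azure Function (the real service wraps code like this in a trigger such as an HTTP request or queue message). The payload shape, order threshold, and function name are illustrative assumptions, not Azure APIs.

```python
# Plain-Python stand-in for logic you would host in an Azure Function.
# In Azure, a trigger (HTTP, queue, timer) would invoke this; the payload
# shape and the $500 auto-approval rule are illustrative assumptions.
import json

def handle_order_event(payload_json):
    """Parse an incoming event and return a decision that the rest of the
    workflow (for example, a Logic App handling approvals and
    notifications) can act on."""
    order = json.loads(payload_json)
    # Code-first, compute-style logic like this suits Functions; the
    # surrounding multi-step workflow suits the Logic Apps designer.
    decision = "auto-approve" if order["total"] < 500 else "needs-review"
    return {"order_id": order["id"], "decision": decision}

print(handle_order_event('{"id": "A-100", "total": 120}'))
# {'order_id': 'A-100', 'decision': 'auto-approve'}
```

A common complementary pattern is a Logic App orchestrating the overall flow (connectors, approvals, retries) while calling a Function for any step that needs real code.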