LUC #29: Navigating Software Updates: A Closer Look at Deployment Methods
Plus, SOLID principles, webhook vs polling, and protecting against DDoS attacks.
Welcome back to another edition of Level Up Coding’s newsletter.
In today’s issue: a closer look at deployment methods, plus recaps on webhooks vs polling, the SOLID principles, and protecting against DDoS attacks.
READ TIME: 7 MINUTES
Navigating Software Updates: A Closer Look at Deployment Methods
Nailing the right deployment pattern is key to smoothly introducing new features and updates. Think of them as the secret sauce for reducing risks, avoiding interruptions, and delivering a seamless experience to users. Among the many possible approaches, there are five that really shine for their effectiveness and adaptability. Let’s take a look at their distinct traits and why developers worldwide rely on them for smooth software launches.
Blue/Green Deployment

This method is a game-changer for zero-downtime updates. It involves two identical environments: Blue and Green. One is always live, while the other stands by. When a new software version is ready for release, it's deployed to the currently inactive environment. After deployment, developers can test the changes under real-world conditions without disrupting live traffic.
Once everything is tested, all traffic is then directed to the new environment. This switch is swift, ensuring users don't notice a thing.
The drawback of Blue/Green deployment is its complexity and cost: maintaining two identical production environments can drain your resources, potentially doubling infrastructure costs.
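As a rough sketch, the cutover can be modeled like this (the `BlueGreenRouter` class and version strings are invented for illustration; real setups do the switch at the load balancer or DNS layer):

```python
# Minimal sketch of a blue/green switch (names are illustrative,
# not any specific tool's API).

class BlueGreenRouter:
    """Routes all traffic to one of two identical environments."""

    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.active = "blue"  # environment currently serving live traffic

    @property
    def idle(self):
        return "green" if self.active == "blue" else "blue"

    def deploy(self, version):
        # New version goes to the idle environment; live traffic is untouched.
        self.environments[self.idle] = version

    def switch(self):
        # Instant cutover: users are routed to the newly deployed environment.
        self.active = self.idle

    def rollback(self):
        # Switching back is equally fast if problems appear.
        self.active = self.idle


router = BlueGreenRouter()
router.deploy("v2.0")  # v2.0 lands on green; blue still serves users
router.switch()        # traffic flips to green
print(router.active)                       # green
print(router.environments[router.active])  # v2.0
```

The rollback path is the same operation as the cutover, which is exactly why this pattern is so safe: the previous version stays warm and ready.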
Canary Deployment

Named after the canary birds once used in coal mines, this method rolls out changes to a small group of users first, keeping an eye on performance and gathering feedback.
If the new feature does well with the initial group, it's gradually introduced to more users. If problems do pop up, disruption is kept to a minimum because the impact is isolated to a small set of users, and developers can fix the issues or roll back before the release reaches a wider audience.
To ensure efficiency, the rollout should be kept within a reasonable timeframe, with adjustments to the size of user increments based on each phase's outcomes.
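Here's a minimal sketch of how a canary cohort might be selected (the `in_canary` helper and the 5% starting figure are illustrative assumptions; production systems usually do this at the routing or load-balancer layer):

```python
import hashlib

# Illustrative canary gate: deterministically routes a percentage of
# users to the new version based on a stable hash of their ID, so the
# same user always lands on the same side as the percentage grows.

def in_canary(user_id: str, percent: int) -> bool:
    """Return True if this user falls inside the canary cohort."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < percent

# Start with 5% of users, then raise the percentage as metrics stay healthy.
cohort = [u for u in (f"user-{i}" for i in range(1000)) if in_canary(u, 5)]
print(len(cohort))  # size of the ~5% cohort out of 1000 users
```

Because the bucket is derived from a hash rather than randomness, widening the rollout from 5% to 20% only adds users; nobody flips back to the old version mid-experiment.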
Rolling Deployment

Rolling deployment is about introducing new software in stages. Instead of updating all servers or delivering to all users at once, it starts small and expands.
This keeps most of the system up and running during the update, reducing the risk of a complete shutdown.
If you're working on a system where it's crucial to keep things continuously operational, this approach is your go-to. However, it does extend deployment time, which can be problematic for large systems. Managing incremental updates across complex applications can also be challenging, and there's a risk of temporary inconsistency in the system while old and new versions run side by side.
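The batch-by-batch idea can be sketched like so (the function and server names are invented; a real rollout would also drain connections and health-check each batch before moving on):

```python
# Sketch of a rolling update: servers are updated in small batches so
# most of the fleet keeps serving traffic throughout the deployment.

def rolling_update(servers, new_version, batch_size=2):
    versions = {s: "v1" for s in servers}
    for i in range(0, len(servers), batch_size):
        batch = servers[i:i + batch_size]
        for server in batch:
            # In reality: drain traffic, install new_version, restart,
            # then health-check before the next batch. Failing checks
            # would abort the rollout, limiting the blast radius.
            versions[server] = new_version
    return versions

fleet = [f"srv-{n}" for n in range(1, 7)]
result = rolling_update(fleet, "v2", batch_size=2)
print(all(v == "v2" for v in result.values()))  # True
```

The batch size is the key tuning knob: larger batches finish faster, smaller batches keep more capacity online and make a bad release easier to catch early.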
Feature Toggles

Think of feature toggles as on-off switches for new features. They allow teams to deploy features quietly, turning them on for specific users when it makes sense.
Feature toggles let you test-drive new features with specific users before going public.
These toggles also support strategies like canary releases and A/B testing. They provide real, comparative insights for future development, and the ability to quickly switch off a problematic feature cuts down the risk when rolling out updates.
Feature toggles are a very useful and popular tool, but having too many toggles can become cumbersome and increases the risk of conflicts between features. It’s a good idea to keep feature flags short-lived where possible.
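A bare-bones toggle check might look like this (the flag names and the in-memory `FLAGS` dict are illustrative assumptions; teams typically use a dedicated flag service rather than hard-coded config):

```python
# Minimal feature-flag sketch: each flag has a global on/off switch
# plus an optional per-user allow-list for quiet, targeted rollouts.

FLAGS = {
    "new-checkout": {"enabled": True, "allowed_users": {"alice", "bob"}},
    "dark-mode":    {"enabled": False, "allowed_users": set()},
}

def is_enabled(flag: str, user: str) -> bool:
    cfg = FLAGS.get(flag)
    if cfg is None or not cfg["enabled"]:
        return False
    # Empty allow-list means "on for everyone"; otherwise gate by user.
    return not cfg["allowed_users"] or user in cfg["allowed_users"]

print(is_enabled("new-checkout", "alice"))  # True
print(is_enabled("new-checkout", "carol"))  # False
print(is_enabled("dark-mode", "alice"))     # False
```

Killing a misbehaving feature is then a one-line config change (`enabled: False`) rather than an emergency redeploy.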
A/B Testing

A/B testing in deployment is like a scientific experiment to guide decision-making on features and changes to an application. Two versions of a feature are presented to different user groups to see which performs better.
For example, if there's uncertainty about which design of a feature is more effective, A/B testing allows for real-time comparison. Each version is given to a separate segment of users, and their interaction with it is closely monitored. The team then uses this data to determine which version is more successful, based on specific metrics like user engagement or ease of use.
By applying A/B testing where it's needed, software teams can keep tweaking their product bit by bit, ensuring it keeps up with what users want and need.
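A deterministic variant assignment can be sketched as follows (the user IDs, the click events, and the `variant` helper are made up for illustration):

```python
import hashlib

# Illustrative A/B split: each user is deterministically assigned to
# variant A or B, and a metric (here, click-through) is tallied per group.

def variant(user_id: str) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

# Simulated engagement events: (user, clicked)
events = [("user-1", True), ("user-2", False),
          ("user-3", True), ("user-4", True)]

clicks = {"A": [], "B": []}
for user, clicked in events:
    clicks[variant(user)].append(clicked)

# Click-through rate per variant; the better-performing version wins.
rates = {v: (sum(c) / len(c) if c else 0.0) for v, c in clicks.items()}
print(rates)
```

Hashing the user ID (rather than assigning randomly per request) keeps each user's experience consistent across sessions, which is essential for the comparison to be meaningful.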
Each deployment pattern stands out for specific strengths: Blue/Green for safety and zero downtime, Canary for controlled, low-risk rollouts, Rolling for maintaining continuous operations, Feature Toggles for flexible feature management, and A/B Testing for data-driven user insights.
The right deployment pattern varies depending on the project's needs and objectives. Recognizing these differences allows teams to choose the best approach for a successful and user-centric software release.
Webhook vs Polling (Recap)
Polling is a pull-based approach that operates on a 'check-in' system: clients initiate API calls to the server at set intervals to ask whether anything has changed. This ensures updates are consistently captured and communicated, though never instantaneously.
Webhooks represent a push-based methodology: the server sends a notification only when new data becomes available, dispatching a payload containing details of the update directly to a predefined webhook URL. This mechanism allows for immediate data synchronization without the need for constant API requests.
Webhooks provide a more efficient and real-time solution, enabling immediate data synchronization as opposed to the delayed response of polling. However, they do come with the trade-off of increased complexity in setup and maintenance compared with polling.
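The contrast can be sketched in-process (the `Server` class below stands in for a real HTTP server, and the registered callback stands in for the client's webhook endpoint; all names are illustrative):

```python
# Pull vs push, side by side, with in-process stand-ins for HTTP calls.

class Server:
    def __init__(self):
        self.latest = None
        self.webhook = None

    # --- Polling (pull): the client calls this on a timer and usually
    #     sees no change, paying latency up to the polling interval.
    def get_updates(self):
        return self.latest

    # --- Webhook (push): the client registers an endpoint once; the
    #     server notifies it the moment data changes.
    def register_webhook(self, callback):
        self.webhook = callback

    def publish(self, data):
        self.latest = data
        if self.webhook:
            self.webhook({"event": "update", "payload": data})


received = []
server = Server()
server.register_webhook(received.append)  # client exposes its "endpoint"
server.publish("order-42-shipped")        # push happens immediately
print(received)  # [{'event': 'update', 'payload': 'order-42-shipped'}]

# The polling equivalent: call get_updates() every N seconds and diff.
print(server.get_updates())  # order-42-shipped
```

The sketch shows the trade-off in miniature: the webhook path needs extra machinery (an endpoint, registration, retry handling in real systems), while the polling path is trivial but only as fresh as its interval.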
SOLID Principles (Recap)
SOLID represents five principles of object-oriented programming. Whether or not you use OOP, knowing these principles gives you a lens into the foundations of clean code which can be applied to many areas of programming.
Single Responsibility Principle (SRP): Each unit of code should have only one job or responsibility. A unit can be a class, module, function, or component. This keeps code modular and reduces the risk of tight coupling.
Open-closed Principle (OCP): Units of code should be open for extension but closed for modification. You should be able to extend functionality with additional code rather than modifying existing code. This principle can be applied to component-based systems such as a React frontend.
Liskov Substitution Principle (LSP): You should be able to substitute objects of a base class with objects of its subclass without altering the ‘correctness’ of the program.
Interface Segregation Principle (ISP): Provide multiple interfaces with specific responsibilities rather than a small set of general-purpose interfaces. Clients shouldn’t need to know about methods and properties that don't relate to their use case. This decreases complexity and increases code flexibility.
Dependency Inversion Principle (DIP): Depend on abstractions, not on concrete classes. Use abstractions to decouple dependencies between different parts of the system; rather than making direct calls between concrete units of code, go through interfaces or abstractions.
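As one small illustration of DIP (all class names here are invented for this sketch), a high-level service depends on an abstraction rather than a concrete backend, so backends can be swapped without touching the service:

```python
from abc import ABC, abstractmethod

# The abstraction both sides depend on.
class Storage(ABC):
    @abstractmethod
    def save(self, name: str, data: str) -> str: ...

# Concrete low-level details, interchangeable behind the interface.
class LocalStorage(Storage):
    def save(self, name, data):
        return f"local://{name}"

class S3Storage(Storage):
    def save(self, name, data):
        return f"s3://reports/{name}"

# High-level policy: knows only about Storage, never the backends.
class ReportService:
    def __init__(self, storage: Storage):  # dependency is injected
        self.storage = storage

    def export(self, name, data):
        return self.storage.save(name, data)


print(ReportService(LocalStorage()).export("q1", "..."))  # local://q1
print(ReportService(S3Storage()).export("q1", "..."))     # s3://reports/q1
```

The same sketch quietly demonstrates LSP as well: either `Storage` subclass can be substituted without `ReportService` noticing.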
How DDoS Attacks Work and How to Prevent Them (Recap)
Distributed Denial of Service (DDoS) attacks are a major threat to digital systems, disrupting traffic to targeted servers, services, or networks, often resulting in financial losses, reputation damage, and diminished user trust.
DDoS attacks inundate a target with traffic from numerous sources, making it difficult to pinpoint and block the bad actors. The multi-source aspect sets DDoS apart from its cousin, the Denial of Service (DoS) attack.
Given the complexity and adaptability of DDoS attacks, it becomes imperative to deploy well-planned defensive measures, such as:
Embracing Redundancy - Distributing network traffic across multiple servers, especially in varied geographical locations, makes it challenging for attackers to bring down your entire system.
Applying Rate Limiting - By restricting the number of requests a user can send in a given time frame, rate limiting can halt suspicious spikes in traffic.
Implementing WAFs - Use Web Application Firewalls to filter HTTP traffic and block harmful patterns.
Leveraging Cloud Solutions - Cloud providers offer built-in solutions to help mitigate DDoS attacks.
Analyzing Traffic - Continuously monitor web traffic for anomalies.
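As a concrete example of one of these measures, here is a minimal fixed-window rate limiter (a deliberate simplification of the token-bucket or sliding-window algorithms real gateways use; the class and client names are illustrative):

```python
import time

# Illustrative fixed-window rate limiter: at most `limit` requests per
# client per `window` seconds; requests over budget are rejected.

class RateLimiter:
    def __init__(self, limit=100, window=60.0):
        self.limit, self.window = limit, window
        self.counters = {}  # client -> (window_start, count)

    def allow(self, client: str, now=None) -> bool:
        now = time.monotonic() if now is None else now
        start, count = self.counters.get(client, (now, 0))
        if now - start >= self.window:   # window expired: start fresh
            start, count = now, 0
        if count >= self.limit:          # over budget: drop the request
            self.counters[client] = (start, count)
            return False
        self.counters[client] = (start, count + 1)
        return True


rl = RateLimiter(limit=3, window=60)
print([rl.allow("10.0.0.9", now=0.0) for _ in range(5)])
# [True, True, True, False, False]
```

In practice this would run at the edge (load balancer, API gateway, or reverse proxy), keyed by IP or API key, so suspicious spikes are absorbed before they reach application servers.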
That wraps up this week’s issue of Level Up Coding’s newsletter!
Join us again next week where we’ll explore understanding different database types, Monolithic vs Microservices Architecture, HTTP vs HTTPS, and how quantum computing works.