
“Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

Martin Fowler, Refactoring: Improving the Design of Existing Code



What is Software Craftsmanship and why will it be crucial in 2025?

Software Craftsmanship is an approach that treats programming as a craft requiring mastery, continuous improvement and attention to technical detail. Unlike mass-produced software, the craftsmanship approach focuses on creating code that is not only functional, but also elegant, maintainable and efficient. It is worth noting, however, that this is not a universally accepted philosophy - many critics point out that too much focus on the “beauty” of code can lead to perfectionism at the expense of delivering business value.

The importance of Software Craftsmanship in 2025 will grow unevenly across sectors. Industries with high security and reliability requirements, such as fintech and healthcare, are already seeing a growing premium on technical quality. For example, insurance company AXA, after a costly customer service system incident (36 hours of unavailability in 2023), radically changed its approach to software development, introducing rigorous quality standards and verification processes. On the other hand, in sectors focused on rapid market experimentation, such as some areas of e-commerce, an artisanal approach will still often give way to speed of implementation.

A key paradox facing programmers in 2025 is that AI tools (which, Forrester Research predicts, will handle 40% of typical programming tasks by the end of 2025) will mostly take over routine work, leaving humans to deal with issues that require deeper design and systems thinking. It is these “craft” aspects of programming - creating elegant abstractions, designing complex architectures, anticipating the long-term consequences of technical decisions - that are most difficult to automate.

For early-career programmers, however, this paradoxically means a more difficult development path. Until now, juniors have started with simple, repetitive tasks that gradually build deeper expertise. A 2024 Stack Overflow survey shows that as many as 68% of developers with less than two years of experience fear that AI will hinder their ability to acquire fundamental skills. This raises an important question: how do we educate the next generation of craftsmen if the first rungs of the development ladder are replaced by automation?

Key pillars of Software Craftsmanship 2025

  • Code quality as a strategic asset for the organization, but with a pragmatic approach to business trade-offs

  • Balancing speed of delivery with sustainability of solutions - different for different business contexts

  • Selective use of AI tools for routine tasks while maintaining human control over key design decisions

  • Adaptability and resilience of systems to unexpected changes in the business and technological environment

Why will customers demand a “craftsmanship” approach from developers in 2025?

The growing complexity of technology ecosystems is leading to a situation where the consequences of poor quality solutions can be catastrophic. A case in point is the British bank TSB, whose 2018 IT systems migration failed, generating losses in excess of £330 million and the loss of 80,000 customers. This and similar situations, such as the recent 17-hour failure of payment systems in Australia (June 2023), are creating a clear market trend: organizations that have experienced the costly consequences of poor technical quality are beginning to consider code quality as part of risk management.

However, there is a significant gap between declarations and practice. Gartner’s 2023 survey found that while 78% of IT executives say they prioritize technical quality, only 31% of companies actually allocate adequate resources to it. This dissonance is particularly apparent in organizations with strong pressure for fast quarterly results, where the long-term benefits of investing in technical quality often lose out to short-term business goals.

For experienced developers, this means developing new skills - not only technical, but also communication and business skills. Successful craftsmen must be able to “translate” the value of technical quality into language that business decision-makers can understand. Instead of abstract concepts of “clean code,” it is more effective to argue based on concrete business risks: “The current architecture increases the risk of system unavailability by 35%, which with our current traffic would mean a loss of XK per day.”
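The kind of risk-to-cost translation described above can be sketched as simple arithmetic. The function and all numbers below are illustrative assumptions, not figures from the text:

```python
# Illustrative sketch: translating a technical risk into an expected business
# cost. All values here are hypothetical placeholders.

def expected_daily_loss(outage_prob_per_day: float,
                        avg_outage_hours: float,
                        revenue_per_hour: float) -> float:
    """Expected revenue lost per day = P(outage) * duration * hourly revenue."""
    return outage_prob_per_day * avg_outage_hours * revenue_per_hour

# Assume the current architecture raises outage probability by 35%:
baseline = expected_daily_loss(0.02, 4, 10_000)         # current estimate
degraded = expected_daily_loss(0.02 * 1.35, 4, 10_000)  # +35% risk

print(f"Extra expected loss per day: ${degraded - baseline:,.0f}")
```

Framing the argument this way replaces “clean code” with a number a decision-maker can weigh against other investments.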

At the same time, it is worth keeping a critical eye on the Software Craftsmanship movement itself. Some organizations, especially market validation startups, may consciously choose a strategy of rapid prototyping at the expense of technical excellence. The problem arises when these makeshift solutions become the foundation of production products without the necessary refactoring. The key difference between strategic compromise and technical laxity lies in awareness of the consequences and planned management of the technical debt incurred.

For mid-level programmers (3-5 years of experience), this means developing the ability to critically assess when the pursuit of technical perfection is justified and when it is a manifestation of harmful perfectionism. This ability to balance the ideals of craftsmanship and business pragmatism will be one of the key differentiators in the 2025 job market.

What benefits will Software Craftsmanship bring to business in the coming years?

A craftsmanship approach can significantly reduce the total cost of ownership (TCO) of IT systems, but the value of this reduction is strongly dependent on the business context. In systems with long life cycles, high reliability requirements and frequent functional changes, the benefits are most significant. For example, Australia’s Commonwealth Bank, through a comprehensive upgrade of its core banking system carried out with a focus on architecture and code quality, reduced annual maintenance costs by 35% and cut the time to make regulatory changes from months to weeks.

On the other hand, in products with a short life cycle or high market uncertainty, an investment in technical excellence may not yield the expected return. Startup Quibi, despite more than $1 billion in funding and an excellent technology team, collapsed after only six months of operation - not because of technical problems, but faulty business assumptions. This example illustrates that even the best technical execution won’t save a product that doesn’t address real market needs.

A pragmatic approach to craftsmanship therefore requires an informed categorization of systems and functionality. McKinsey, in its report “Technical Debt and Business Value” (2022), proposes a classification matrix that divides systems according to two criteria: life expectancy and rate of change. For long-lived systems with a high rate of change (e.g., core banking), investment in superior technical quality is fully justified. For short-lived systems with a low rate of change (e.g., periodic marketing campaigns), pragmatic quality trade-offs are often more cost-effective.
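The life-expectancy versus rate-of-change matrix can be sketched as a small decision function. The thresholds and category labels below are invented for illustration and are not taken from the McKinsey report:

```python
# A minimal sketch of a lifespan vs. rate-of-change quality matrix.
# Thresholds and recommendations are illustrative assumptions.

def quality_recommendation(lifespan_years: float, changes_per_year: int) -> str:
    long_lived = lifespan_years >= 5
    fast_changing = changes_per_year >= 12
    if long_lived and fast_changing:
        return "invest in top technical quality"          # e.g. core banking
    if long_lived and not fast_changing:
        return "solid quality, moderate investment"
    if not long_lived and fast_changing:
        return "lightweight quality gates, fast iteration"
    return "pragmatic trade-offs"                         # e.g. one-off campaign

print(quality_recommendation(10, 50))   # long-lived, fast-changing system
print(quality_recommendation(0.5, 2))   # short-lived, stable system
```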

For developers aspiring to be tech leads, it becomes crucial to understand these business optics and be able to match technical standards to different contexts. This means moving away from the dogmatic “always the highest quality” approach to consciously choosing the right level of craftsmanship for a particular business context.

It is also worth noting that the benefits of craftsmanship are harder to measure than the costs. While the additional time spent on refactoring or writing tests is directly visible in project budgets, benefits such as avoided failures or faster deployment of future changes are counterfactual - it is difficult to prove the value of problems that did not occur. This asymmetry of measurement presents a significant challenge in justifying investments in technical quality.

The most mature organizations are addressing this challenge by implementing more sophisticated metrics that go beyond traditional productivity measures. Spotify, for example, monitors “time to recovery” (disaster recovery time) and “time to market for similar functionality” as indirect indicators of technical quality. These metrics, tied directly to business goals, are proving far more effective in communicating the value of craftsmanship than abstract measures of code quality.
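A metric like “time to recovery” is straightforward to compute from incident records. The sketch below uses hypothetical field names and data:

```python
from datetime import datetime

# Sketch: mean time to recovery (MTTR) from a list of incident records.
# Record shape and data are invented for the example.

incidents = [
    {"start": datetime(2025, 1, 3, 10, 0), "resolved": datetime(2025, 1, 3, 10, 45)},
    {"start": datetime(2025, 2, 7, 22, 10), "resolved": datetime(2025, 2, 8, 0, 10)},
]

def mttr_minutes(records) -> float:
    """Average minutes from incident start to resolution."""
    durations = [(r["resolved"] - r["start"]).total_seconds() / 60 for r in records]
    return sum(durations) / len(durations)

print(f"MTTR: {mttr_minutes(incidents):.1f} minutes")
```

Tracked over time, a falling MTTR is an indirect but business-legible signal of improving technical quality.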

Pragmatic ROI of Software Craftsmanship

  • Reduce long-term maintenance costs of systems by 20-40% (depending on context and initial state)

  • Reduce change time by 25-35% for stabilized, well-designed systems

  • Reduce major incidents by 40-60% in critical systems

  • Increase retention of technical talent by 15-30% in teams with high quality standards

What technical skills will become the foundation of Software Craftsmanship in the AI era?

The ability to design complex system architectures will become a key competency that distinguishes craftsmen in the AI era. While tools such as GitHub’s Copilot and Amazon’s CodeWhisperer are increasingly automating implementation at the code level, designing a high-level architecture remains a domain that requires a deep understanding of technical tradeoffs, business context and the long-term consequences of decisions. As an example, the Netflix team designed its microservices architecture with a focus on resiliency and component autonomy, enabling the system to be extremely reliable even during partial infrastructure failures.

For early-career developers (1-3 years of experience), it becomes a challenge to acquire these architectural skills in a world where simple implementation tasks are increasingly automated. Unlike previous generations who gradually built architectural understanding through thousands of hours of coding, today’s juniors must find alternative learning paths. An effective approach is to deliberately study existing open source systems, participate in design sessions with more experienced team members, and work on their own projects with an emphasis on architectural awareness from the beginning.

Sophisticated testing skills are becoming a second technical pillar. Paradoxically, although AI tools can generate simple unit tests, a holistic approach to quality assurance requires a deeper understanding that goes beyond the capabilities of current AI systems. Designing an effective test strategy that takes into account different levels of testing (unit, integration, end-to-end) and the trade-offs between coverage, execution time and maintainability remains a domain that requires human judgment.

The traditional approach to testing, focused on the number of tests and formal code coverage, is evolving into one that is more focused on business risk. Netflix, for example, has moved away from a rigid requirement for high unit test coverage to a strategic mix of tests that maximizes detection of critical bugs from a user perspective. This paradigm shift requires testers and developers to have risk analysis and prioritization skills that go beyond the purely technical aspects of testing.
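A risk-oriented test strategy of this kind can be approximated by ranking areas by business impact times change frequency and spending the test budget top-down. The areas and scores below are purely illustrative:

```python
# Sketch of risk-based test prioritization: rank areas by impact x change
# frequency. All names and scores are invented for illustration.

test_areas = [
    {"name": "checkout/payment", "impact": 9, "change_freq": 7},
    {"name": "recommendations",  "impact": 4, "change_freq": 8},
    {"name": "legal footer",     "impact": 1, "change_freq": 1},
]

def prioritize(areas):
    """Highest risk score first; test effort follows this order."""
    return sorted(areas, key=lambda a: a["impact"] * a["change_freq"], reverse=True)

for area in prioritize(test_areas):
    print(area["name"], area["impact"] * area["change_freq"])
```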

The third key area is a deep understanding of the internal mechanisms of the technologies used. In an era of increasing abstraction and “magic” tools, the ability to look under the hood and understand what is really going on at the lower levels of the technology stack is becoming a distinctive competency. For example, while the average React developer may rely on libraries and components without a deeper understanding of their inner workings, a craftsman will understand the basic mechanisms of virtual DOM reconciliation, the lifecycle of components and the performance implications of various usage patterns.
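For intuition, keyed reconciliation can be reduced to a toy diff over child keys: given old and new lists, decide what to create, remove, or keep. React’s real algorithm is far more sophisticated; this sketch is purely didactic:

```python
# Toy sketch of keyed reconciliation: diff two lists of keyed children.
# Real virtual DOM diffing also handles moves, ordering, and nested trees.

def diff_children(old_keys, new_keys):
    old, new = set(old_keys), set(new_keys)
    return {
        "create": [k for k in new_keys if k not in old],
        "remove": [k for k in old_keys if k not in new],
        "keep":   [k for k in new_keys if k in old],
    }

patch = diff_children(["a", "b", "c"], ["b", "c", "d"])
print(patch)  # {'create': ['d'], 'remove': ['a'], 'keep': ['b', 'c']}
```

Understanding why stable keys let the diff keep nodes instead of recreating them is exactly the kind of under-the-hood knowledge the paragraph describes.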

For developers with 3-5 years of experience, it becomes crucial to consciously deepen their knowledge in these strategic areas, rather than superficially following the latest frameworks and libraries. A valuable strategy is to select 1-2 core technologies for deep understanding (e.g., the mechanics of databases or the internals of the chosen programming language), supplemented by a broader but shallower understanding of related technologies.

How will automation affect the artistic dimension of software development?

Automating routine programming tasks significantly changes the nature of a developer’s work, potentially enhancing rather than diminishing the artistic dimension of software development. Tools such as GitHub Copilot and DeepSeek Coder allow developers to focus on more creative aspects of their work - designing abstractions, modeling domains, creating consistent and elegant APIs. For experienced developers, this change can be liberating - a survey conducted by GitHub in 2023 indicates that 75% of developers using Copilot report higher job satisfaction and greater comfort taking on more challenging project tasks.

However, the relationship between automation and creativity is not unequivocally positive. There is a risk that over-reliance on AI suggestions leads to homogenization of code and solutions, limiting the diversity of approaches that has historically been a source of innovation in programming. A critical observation by Professor Daniel Jackson of MIT points out that AI models are by definition conservative - trained on existing code, they tend to replicate dominant patterns rather than propose groundbreaking alternatives. For craftsmen, this means the need to consciously balance between the convenience of automation and the preservation of creative autonomy.

For mid-level developers (3-5 years of experience), it becomes crucial to develop a personal strategy for using AI tools. An effective approach is to treat AI as a collaborator in the creative process - using its suggestions as a starting point for critical analysis and improvement, rather than uncritical acceptance. An example of this is the practice used by the Stripe team, where AI-generated code is always critically analyzed for elegance, performance and compatibility with the broader system architecture.

One intriguing trend is the change in the proportion of time spent on different phases of software development. While traditionally developers spent most of their time on implementation, in the era of automation more attention can be given to the design and reflection phases. The Figma team is experimenting with a process in which developers spend up to 40% of their time on the “design in code” phase - creating and iterating prototypes with a focus on the elegance of interfaces and abstractions, before proceeding to full implementation. This shifted balance emphasizes the artistic dimension of craftsmanship.

At the same time, automation raises fundamental questions about the professional identity of programmers. For many, especially those with longer tenure, proficiency in writing code has been a key part of their professional identity. The shift toward overseeing and directing AI tools can create a sense of loss of creative autonomy. A survey conducted by Stack Overflow in 2023 found that 42% of experienced developers (10+ years of experience) expressed concerns about losing control over the creative process when using AI tools heavily.

How to combine flexibility with attention to code quality in dynamic projects?

The use of modular architecture is a fundamental strategy for combining flexibility with quality, but implementing this approach brings with it a number of practical challenges. Contrary to popular belief, modularization does not automatically mean microservices - which for many organizations have proven to be a premature optimization that generates more problems than benefits. SoundCloud, after initially enthusiastically adopting microservices, partially consolidated back into larger, more cohesive functional units, which reduced operational complexity while retaining the key benefits of modularization.

The key challenge lies in identifying the right module boundaries, which should reflect real divisions in the business domain rather than arbitrary technical decisions. The Domain-Driven Design (DDD) technique with the concept of bounded contexts offers a useful framework for this analysis, but requires a deep understanding of the business. For mid-level developers (3-5 years of experience), developing the ability to model the domain and identify the natural boundaries of systems becomes a core competency - beyond the purely technical aspects of programming.
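The bounded-context idea can be illustrated with two contexts that model “customer” differently and translate explicitly at the boundary. All names are invented for the example:

```python
from dataclasses import dataclass

# Sketch of two DDD bounded contexts that share a concept ("customer") under
# different models, with an explicit translation at the boundary.

@dataclass
class SalesCustomer:          # "customer" as the Sales context sees it
    id: int
    lifetime_value: float

@dataclass
class SupportCustomer:        # "customer" as the Support context sees it
    id: int
    open_tickets: int

def to_support_context(c: SalesCustomer, open_tickets: int) -> SupportCustomer:
    """Anti-corruption layer: translate between contexts explicitly."""
    return SupportCustomer(id=c.id, open_tickets=open_tickets)

print(to_support_context(SalesCustomer(7, 1200.0), open_tickets=2))
```

The explicit translation function is the point: each context keeps its own model, and coupling is confined to one visible boundary.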

Automating quality assurance processes helps accelerate delivery while maintaining high standards, but building an effective CI/CD pipeline requires significant upfront investment. The challenge is determining the right level of automation for a specific project context. While organizations with high DevOps maturity, such as Google and Shopify, reap the benefits of advanced automation (thousands of deployments per day with minimal downtime), smaller teams often encounter the “over-engineering trap” - devoting a disproportionate amount of resources to building and maintaining an advanced CI/CD infrastructure whose scale exceeds their actual needs.

A pragmatic approach is to build out automation gradually, starting with the areas of highest risk and greatest return on investment. The Basecamp team uses a “just enough automation” strategy, automating business-critical path tests first, plus static code analysis for the most common quality issues. This selective approach delivers 80% of the benefits at 20% of the effort typical of full automation.

An iterative approach with built-in refactoring cycles creates space for parallel functionality development and quality improvement, but requires conscious management on the part of tech leads. One of the biggest challenges is determining the right timing and scope of refactoring - too early can lead to inefficient use of resources, too late can perpetuate problematic patterns in the code. A practical heuristic used by teams like Square is the “three-use rule” - when a similar pattern or functionality appears a third time, it’s worth investing in refactoring and creating a reusable, high-quality component.
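The “three-use rule” in practice might look like this: once the same check appears a third time, it is extracted into a single shared, well-tested helper. The functions below are invented for the example:

```python
# Illustration of the "three-use rule": duplicated validation logic is
# extracted into one reusable component at its third occurrence.

def is_valid_email(address: str) -> bool:
    """The extracted, shared helper (deliberately simplistic)."""
    return "@" in address and "." in address.split("@")[-1]

# Call sites that previously duplicated the check now reuse it:
def register_user(email: str) -> bool:
    return is_valid_email(email)

def invite_user(email: str) -> bool:
    return is_valid_email(email)

def subscribe_newsletter(email: str) -> bool:   # third use triggered extraction
    return is_valid_email(email)

print(register_user("ada@example.com"), subscribe_newsletter("nope"))  # True False
```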

The most sophisticated teams are evolving toward “seamless refactoring”: instead of scheduling dedicated refactoring sprints (which often fall victim to business pressures), they integrate incremental improvements in code quality as an integral part of daily work. This philosophy, implemented by Atlassian’s teams among others, requires a high level of technical discipline and an organizational culture that actually values and supports the approach at all levels.

The most common pitfalls in balancing quality and speed

  • Treating quality and speed as opposing goals instead of complementary aspects of effective delivery

  • Choosing trendy architectures (e.g., microservices) without analyzing whether they really fit the needs and maturity of the organization

  • Building overly complex CI/CD processes disproportionate to the scale of the project and team

  • Postponing refactoring “for later” without a concrete plan to carry it out

  • Lack of education of business stakeholders about the importance and value of investing in technical quality

Why will security and compliance be inseparable from craftsmanship in 2025?

Integrating security into the software development process (Security by Design) is ceasing to be an optional extra and is becoming a fundamental legal and business requirement. European regulations such as NIS2 and the Cyber Resilience Act, which take effect between 2024 and 2025, impose stringent Security by Design requirements for a wide range of digital products. For organizations, this means a fundamental shift in their approach to security - from reactive (detecting and patching vulnerabilities) to proactive (designing systems with security in mind from the start).

Putting these principles into practice remains a challenge, especially in organizations with well-established software development practices. The traditional model, in which a security team conducts an audit prior to deployment, becomes inefficient in the face of continuous delivery and rapid release cycles. Banque de France, despite initial internal resistance, redefined the process by integrating security experts directly into development teams and implementing automated security scans as part of the daily build process. This transformation initially slowed teams down, but after six months resulted in a 70% reduction in critical vulnerabilities detected in late phases and an acceleration of the deployment cycle.

For early and mid-career programmers, this means developing specific competencies in secure coding. It’s no longer enough to delegate security responsibilities to dedicated experts - a basic knowledge of the OWASP Top 10, secure data management principles or defensive programming techniques is becoming part of every programmer’s core skill set, regardless of specialization.
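One of those OWASP basics, preventing SQL injection, can be shown in a few lines. The schema and data are invented for the example:

```python
import sqlite3

# Sketch of a defensive-coding basic: parameterized queries instead of
# string concatenation, so user input is bound as data, never as SQL.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"   # a classic injection attempt

# Unsafe (never do this): f"SELECT * FROM users WHERE name = '{user_input}'"
# Safe: the driver binds the value; the payload cannot alter the query.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []
```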

Compliance with rapidly changing regulations requires a new approach to system design. Global companies like Siemens and SAP are implementing an architecture referred to as “compliance-aware design,” which is based on three pillars: (1) abstraction of compliance-related logic into dedicated components, (2) built-in auditing and reporting mechanisms, and (3) parameterization of jurisdiction-dependent behavior. This approach, while initially involving a greater investment in the design phase, drastically reduces the cost of compliance with new regulations, which in traditional architectures often require costly and risky rebuilds.
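Pillar (3), parameterization of jurisdiction-dependent behavior, might be sketched as configuration data rather than conditionals scattered through business logic. The retention periods below are hypothetical, not legal guidance:

```python
# Sketch of jurisdiction-dependent behaviour kept as data. The values are
# invented placeholders, not actual regulatory requirements.

RETENTION_DAYS = {
    "EU": 30,    # hypothetical data-minimisation policy
    "US": 365,   # hypothetical local requirement
}

def retention_days(jurisdiction: str) -> int:
    """Fail loudly for unknown jurisdictions instead of guessing a default."""
    try:
        return RETENTION_DAYS[jurisdiction]
    except KeyError:
        raise ValueError(f"No compliance profile for {jurisdiction!r}")

print(retention_days("EU"))  # 30
```

Adapting to a new regulation then means changing one table and its tests, not hunting for embedded conditionals.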

The design of AI-based systems, which are subject to new, specific regulations such as the EU AI Act, is becoming a particularly challenging area. These regulations introduce the categorization of AI systems by risk level, with corresponding requirements for transparency, human oversight, documentation and testing. For many organizations, this represents a whole new dimension of compliance beyond the traditional security and data protection framework.

Professionals aspiring to become architects and tech leads should develop competencies at the intersection of technology, law and risk management. Familiarity with the major regulatory frameworks (GDPR, NIS2, AI Act in Europe; HIPAA, CCPA in the US) and the ability to design systems that address their requirements is becoming a valued specialty. At the same time, it’s worth keeping in mind that regulatory compliance alone doesn’t guarantee security - some organizations fall into the “compliance theater” trap, treating compliance as a tick-box exercise instead of actually addressing risks.

Transparency and auditability are becoming inherent attributes of socially responsible systems, going beyond formal regulatory requirements. For systems that make decisions that affect people (credit assessment, recruitment, medical diagnosis), the ability to explain the decision-making process becomes a key ethical and practical requirement. Implementing these features requires an informed design - from data structures to process flows to reporting mechanisms - that enables reconstruction and verification of every significant system decision.
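A minimal sketch of such built-in auditability: each significant decision records its inputs, the rule applied, and the outcome, so it can later be reconstructed. The scoring rule is invented for illustration:

```python
# Sketch of an audit trail for an automated decision. The debt-to-income
# rule is a made-up example, not a real credit policy.

audit_log = []

def assess_credit(income: float, debt: float) -> bool:
    approved = debt / income < 0.4 if income > 0 else False
    audit_log.append({
        "decision": "credit_assessment",
        "inputs": {"income": income, "debt": debt},
        "rule": "debt-to-income < 0.4",
        "outcome": approved,
    })
    return approved

assess_credit(5000, 1500)
print(audit_log[-1]["outcome"], audit_log[-1]["rule"])
```

Because every entry carries its inputs and rule, any individual decision can be replayed and explained after the fact.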

What work methodologies (e.g. Agile, DevOps) will dominate the software craftsman community?

Hybrid methodological approaches are gaining traction in the craftsman community, but their effective implementation requires a deeper understanding that goes beyond a superficial knowledge of terminology and ceremony. Instead of rigorously adhering to a single framework, mature organizations are creating their own customized methodologies, borrowing elements from a variety of sources. Spotify, despite the widespread imitation of their “Squads-Tribes-Chapters-Guilds” model, is constantly evolving its practices, adapting them to changing needs and contexts - including moving away from some elements of the original model that proved suboptimal in practice.

The key challenge lies in selectively adapting practices that are appropriate for a particular context, rather than uncritically adopting entire methodologies. For example, while Scrum ceremonies may work well for product teams, they may not be effective for platform or infrastructure teams, where work is more continuous, less iterative in nature. Digital Ocean effectively combines elements of Kanban for platform teams with a more scrum-like approach for product teams, maintaining consistency in values and principles with flexibility in specific practices.

It can be a pitfall for early-career developers to focus on the external aspects of methodologies - ceremonies, roles, artifacts - without understanding their deeper purpose and principles. A valuable approach is to study foundational works in agile software development (such as the original Agile Manifesto or books by Kent Beck, Martin Fowler) that explore deeper principles and values, not just specific implementation practices.

Strengthening DevOps practices with a focus on system observability is becoming a key trend, but its implementation varies significantly by scale and context. While giants like Google and Amazon are building advanced, in-house DevOps platforms, smaller organizations often face a “DevOps Tax” - a disproportionate investment in building and maintaining advanced automation and monitoring infrastructure. HashiCorp proposes a pragmatic, incremental approach, where organizations start with the basics (CI/CD, monitoring basic metrics), adding more advanced features (tracing, chaos engineering) as these investments mature and prove their business value.

A particular challenge for many organizations is the transition from “doing DevOps” (implementing tools and practices) to “being DevOps” (cultural transformation toward shared responsibility for the entire software development lifecycle). Akamai, despite significant investments in DevOps tools, initially experienced limited benefits due to persistent organizational silos. The breakthrough came only after a deeper cultural transformation, including changes in team structure, performance metrics and overall process accountability.

Methodologies that support continuous team learning are gaining traction, but their formalization often leads to paradoxical results. Some organizations, in an effort to standardize and scale learning processes, create complex frameworks and ceremonies that paradoxically inhibit organic, contextual learning. GitHub takes an alternative approach, creating a “learning infrastructure” - spaces, tools and incentives to support spontaneous knowledge sharing - rather than imposing top-down defined processes. This flexible structure includes both synchronous forms (pair programming, internal lightning talks) and asynchronous forms (internal wikis, recorded demos) to accommodate different learning styles and time constraints.

For developers aspiring to leadership roles, it becomes critical to develop the ability to consciously shape a team culture that supports continuous improvement. This goes beyond knowledge of specific methodologies or tools to include a deeper understanding of team psychology, group dynamics and facilitation techniques for effective collaboration. These “soft” aspects of technical leadership often have a greater impact on team quality and effectiveness than specific methodological or technological choices.

How to develop soft skills to become a desirable specialist of the future?

Effective technical communication is becoming a critical skill in an environment where technical decisions have an increasing business and social impact. It goes far beyond the ability to express oneself clearly - it includes tailoring the level of detail and language to the audience, actively listening to stakeholder needs, and the ability to transform complex technical concepts into narratives that non-technical audiences can understand. Contrary to popular stereotypes, these skills are not innate or reserved for “social people” - they are specific competencies that can be methodically developed.

For early-career developers (1-3 years of experience), it’s crucial to understand that technical communication is a multidimensional skill, covering a variety of contexts: from code documentation and code review, to team communication, to interactions with business stakeholders. Each of these contexts requires slightly different approaches and techniques. An effective development strategy is to consciously practice these different forms of communication, starting with the most technical (e.g., giving internal technical presentations to the team) and gradually expanding to communication with non-technical audiences.

A specific technique used successfully by teams such as Atlassian is the “elevator pitch exercise” - regular sessions in which developers practice explaining technical concepts in three levels of detail: to a teammate (full technical detail), to a product manager (focus on functional and business implications), and to a non-technical director (focus on business value and strategic importance). This regular practice builds “communication muscle” and flexibility to adapt the message.

Emotional intelligence and the ability to collaborate in diverse teams are not soft add-ons, but fundamental competencies in a complex software development environment. McKinsey research indicates that teams with high levels of diversity (both demographic and cognitive) achieve 35% better business results, but only if they collaborate effectively, which requires high emotional intelligence from all members.

For mid-level programmers (3-5 years of experience), developing emotional intelligence often involves breaking deeply ingrained communication habits. The technique of “conscious pausing” - consciously inserting short pauses before responding in emotionally charged situations - helps move from a reactive to a reflective mode of communication. This prevents technical conflicts from escalating into personal ones and enables a deeper understanding of the other party’s perspective.

The ability to think strategically and take a business perspective is the third pillar of soft skills. Programmers who can go beyond the purely technical aspects of a problem and understand the broader business context become much more valuable team members. A practical approach to developing this skill includes active participation in business meetings, self-study of the fundamentals of the business domain for which the software is being developed, and regular discussions with product managers and business stakeholders.

The challenge remains for many organizations to create an environment that actually values and supports the development of these soft skills. Traditional employee evaluation systems in IT often focus on technical results and performance, overlooking critical contributions in the areas of collaboration, communication or mentoring. Designing evaluation and development systems that adequately recognize and reward these aspects of work remains a significant challenge for HR and technical leaders.

For advanced developers aspiring to leadership roles, it becomes crucial to consciously build their brand as a technical leader who combines deep technical knowledge with refined soft skills. Public speaking, publications or active participation in communities of practice allow you to showcase these skills to a wider audience and build a reputation beyond the standard “technical expert” image.

Practical techniques for developing key soft skills

  • Technical communication: regularly practice explaining the same concepts at different levels of detail; actively ask for feedback on clarity of communication

  • **Emotional intelligence**: practice a “conscious pause” before reacting in emotionally charged situations; keep a regular journal of reflections on team interactions

  • **Business thinking**: actively study product business metrics; attend product and business meetings

  • **Cross-functional collaboration**: initiate collaboration with representatives of other departments (design, marketing); participate in brainstorming workshops with mixed teams

  • **Conflict management**: practice the “steelman” technique - presenting the other side’s arguments in the strongest possible form before responding

How will technology ethics affect the daily work of programmers?

Responsibility for the social consequences of the solutions created ceases to be an abstract postulate and becomes a concrete legal and market requirement. Regulations such as the EU AI Act introduce categorization of systems according to risk level, with specific requirements for high-risk systems (e.g., in recruitment, credit assessment, medical diagnostics). For developers, this means the need to consider ethical aspects already at the design stage - not as an optional extra, but as an integral part of the development process.

The challenge remains to operationalize these principles in everyday programming practice. Unlike typical functional requirements, ethical aspects are often more difficult to clearly define and test. Some organizations are addressing this challenge by introducing formal ethics assessment processes, such as the “Ethics Impact Assessment” used by Microsoft, which is analogous to the well-known security or privacy assessments. This structured process helps turn abstract ethical principles into concrete questions and design criteria that can be systematically addressed by technical teams.

For early-career developers, ethical awareness is often limited to the obvious, glaring cases (e.g., autonomous weapons systems), overlooking more subtle issues present in day-to-day work. For example, seemingly innocuous design decisions in social networking or e-commerce applications (notification mechanisms, recommendation algorithms) can have a significant impact on user welfare, potentially promoting addictive usage patterns or reinforcing biases. Developing an “ethical radar” - the ability to identify the potential ethical consequences of everyday technical decisions - is becoming an essential professional skill.

Algorithmic transparency and explainability of information systems’ decisions are becoming key requirements, especially for systems that have a significant impact on people’s lives. The traditional approach to machine learning, focused mainly on maximizing the accuracy of models, is evolving toward “Explainable AI” (XAI), where the ability to explain the decision-making process is as important as the accuracy itself. In practice, this often means choosing simpler, more interpretable models (like decision trees) over black boxes (like deep neural networks) for applications where transparency is key.
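To make the interpretability argument concrete, here is a minimal sketch - with invented toy data and a hypothetical `train_stump` helper - of a one-rule decision stump whose entire decision process can be stated in a single sentence, exactly the property black-box models lack:

```python
# A hand-rolled one-rule "decision stump": the simplest interpretable model.
# All data, names and the 0/1 task below are invented for illustration.
def train_stump(samples, labels):
    """Pick the single threshold that minimises classification errors."""
    best = None
    for t in sorted(set(samples)):
        errors = sum((s > t) != y for s, y in zip(samples, labels))
        if best is None or errors < best[1]:
            best = (t, errors)
    return best[0]

# Toy credit data: feature = debt-to-income ratio, label True means "reject"
ratios = [0.1, 0.2, 0.3, 0.6, 0.7, 0.9]
reject = [False, False, False, True, True, True]

threshold = train_stump(ratios, reject)
explanation = f"reject when debt-to-income ratio > {threshold}"
# The whole model fits in one human-readable sentence -
# that is the interpretability win over a deep neural network
```

The trade-off, of course, is expressive power: such a model can only capture trivially simple decision boundaries, which is why XAI in practice is a spectrum of choices rather than a binary one.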

For teams working with AI technologies, a practical approach is to integrate “explainability” as an explicit, measurable requirement early in the development process. Instead of treating it as an add-on implemented after the fact, Allianz teams have taken an approach in which each ML model must meet specific explainability criteria appropriate to its application and level of risk. These criteria are systematically reviewed and tested, as are the functional aspects of the system.

Privacy and data minimization are becoming fundamental principles of ethical design, beyond formal regulatory requirements. In the era of big data, there is a strong temptation to collect as much data as possible “just in case,” leading to increased risk of privacy breaches and loss of trust. The ProtonMail or Signal teams embrace a “privacy by design” philosophy, where any proposal to collect user data must pass a rigorous necessity assessment - with the default assumption that data should not be collected unless there is a specific, well-founded use.

For intermediate and advanced developers, developing a deeper understanding of technology ethics goes beyond following specific guidelines - it requires fundamental reflection on the broader impact of technology on society. Organizations such as the Center for Humane Technology offer a valuable conceptual framework and tools for systematically assessing the potential consequences of technological solutions. This deeper ethical perspective allows us to see and address not only the direct, intended effects of technology, but also the indirect and unintended consequences that often have the greatest social impact.

Why will continuous learning be key to staying competitive?

Accelerating technological evolution is making the traditional approach to learning - getting an education and then gradual further training - fundamentally inadequate. According to the World Economic Forum’s 2023 report, more than 50% of key professional skills will change significantly over the next 5 years, with the highest rate of change precisely in the IT industry. For developers, this means the need to move from periodic knowledge updates to a continuous, systematic learning process integrated into daily professional practice.

The challenge is to effectively navigate the ocean of available technologies and educational materials. The paradox of choice - decision paralysis resulting from an overabundance of options - afflicts many developers who do not know where to focus their limited resources of attention and time. An effective antidote is a conscious, strategic approach to professional development, based on regular analysis of the market, one’s interests and long-term career goals.

For early-career developers (0-3 years of experience), it is crucial to balance learning the fundamentals with exposure to current technologies. The popular “shiny object syndrome” approach - constantly jumping between the latest frameworks and tools - often leads to superficial knowledge without a solid foundation. A more effective strategy is to focus on fundamental concepts (data structures, algorithms, design patterns, architecture) with parallel practice in selected current technologies that allow you to put these foundations into practice.

The “learning stack” technique - consciously building a coherent, complementary set of technologies instead of a random collection of unrelated tools - maximizes synergies between different learning areas. For example, a developer specializing in the JavaScript ecosystem might build a stack including TypeScript (for static typing), React (for frontend), Node.js (for backend), GraphQL (for APIs) and Cypress (for testing), creating a coherent, mutually reinforcing whole instead of a scattered collection of unrelated technologies.

For mid-level developers (3-7 years of experience), the challenge is to move from learning tools to a deeper understanding of principles and patterns. At this stage, it becomes valuable to study software engineering “classics” - books such as Robert Martin’s Clean Code, Martin Fowler’s Refactoring and Eric Evans’ Domain-Driven Design - which present fundamental principles that transcend specific technologies. This investment in timeless knowledge builds a solid foundation for faster evaluation and adoption of new tools in the future.

The “T-shaped skills” strategy - deep specialization in one area combined with a broader understanding of related technologies - remains an effective model, but requires conscious evolution over time. Rather than sticking to one static specialization for their entire career, adaptive craftsmen regularly recalibrate their competency profile, shifting their area of specialization in response to technological and market changes. This fluid specialization requires regular monitoring of trends and a willingness to invest in new areas, even if it means a temporary reduction in the level of expertise.

For advanced programmers (7+ years of experience), systematic exploration of related and seemingly distant disciplines becomes a valuable supplement to technical learning. Cognitive science, complex systems theory, philosophy of science or even visual arts can provide new perspectives and mental models that translate into innovative approaches to IT problems. This interdisciplinary development not only increases professional value, but also counters the burnout and rut that often afflict experienced professionals.

What niche technologies are worth watching to remain a leader in craftsmanship?

Advanced formal verification technologies are gaining importance in the context of increasing requirements for software reliability and security. Unlike traditional testing, which can prove the presence of errors, formal verification mathematically proves their absence. While historically these methods have been used mainly in critical systems (aviation, nuclear infrastructure), we are now seeing their gradual expansion to more mainstream applications.

A practical example is Infer, a static analysis tool developed by Facebook (Meta) that detects memory errors and data races in C/C++/Objective-C/Java code. Unlike traditional linters, Infer uses separation logic and bi-abduction to perform formal proofs on code fragments. Importantly, the tool is integrated into the daily workflow of developers (as part of CI/CD), demonstrating that formal methods can be practically applied on an industrial scale.

For intermediate programmers, it is worthwhile to start exploring these technologies with more accessible tools, such as TLA+ (temporal logic of actions), a formal specification language created by Leslie Lamport. TLA+ allows modeling and verification of complex concurrent and distributed systems, detecting subtle logic errors at the design stage, before implementation. Amazon uses TLA+ to verify critical AWS components, saving millions of dollars in error and refactoring costs.
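The kind of design-stage checking TLA+ enables can be approximated in miniature. The sketch below - an illustrative two-process protocol, not real TLA+ - exhaustively explores every reachable state and checks a mutual-exclusion invariant in each one, in the spirit of what the TLC model checker does:

```python
# Illustrative two-process flag protocol, verified by exhaustive exploration
# of its reachable state space - in the spirit of TLC, the TLA+ model checker.
# The protocol and all names here are invented for illustration.

def swap(t, i, v):
    lst = list(t); lst[i] = v
    return tuple(lst)

def next_states(state):
    pc, flags = state
    for i in (0, 1):
        other = 1 - i
        if pc[i] == "idle":                         # raise my flag, start waiting
            yield (swap(pc, i, "want"), swap(flags, i, True))
        elif pc[i] == "want" and not flags[other]:  # enter critical section
            yield (swap(pc, i, "critical"), flags)
        elif pc[i] == "critical":                   # leave, lower my flag
            yield (swap(pc, i, "idle"), swap(flags, i, False))

def mutual_exclusion(state):
    pc, _ = state
    return not (pc[0] == "critical" and pc[1] == "critical")

# Exhaustive search over all reachable states
init = (("idle", "idle"), (False, False))
seen, frontier, violations = {init}, [init], []
while frontier:
    s = frontier.pop()
    if not mutual_exclusion(s):
        violations.append(s)
    for t in next_states(s):
        if t not in seen:
            seen.add(t)
            frontier.append(t)
# violations stays empty: the invariant holds in every reachable state
# (the protocol can still deadlock - a liveness issue TLA+ would also catch)
```

Real TLA+ specifications add temporal operators, liveness properties and fairness conditions on top of this basic reachability idea, which is what makes them powerful for concurrent and distributed designs.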

Quantum computing, although still in the early stages of practical application, is introducing a fundamentally new computing paradigm that could revolutionize areas such as cryptography, combinatorial optimization and molecular simulation. IBM, Google, Microsoft and a number of startups are investing billions of dollars in the development of quantum infrastructure, suggesting that the technology will reach practical utility faster than was predicted a few years ago.

For most craftsmen, working directly with quantum computers will remain out of reach in the coming years, but understanding the fundamental concepts (qubits, superposition, quantum entanglement) and basic quantum algorithms (Shor, Grover) becomes important for two reasons: (1) preparing for the post-quantum era in cryptography, where current asymmetric algorithms (RSA, ECC) will become vulnerable to attacks; (2) recognizing problems where quantum computing may bring breakthroughs, allowing for strategic positioning of projects and products.
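As a small taste of the Grover algorithm mentioned above, this toy simulation - pure Python, with an arbitrarily chosen marked index - runs one Grover iteration on a 4-element search space, where a single iteration already concentrates all probability on the marked state:

```python
# Toy 2-qubit Grover search simulated classically; the marked index is arbitrary.
N = 4
marked = 2  # the "solution" the oracle recognizes

# Start in the uniform superposition over all N basis states
state = [1 / N ** 0.5] * N

def oracle(amps):
    # Flip the sign of the marked state's amplitude
    return [-a if i == marked else a for i, a in enumerate(amps)]

def diffusion(amps):
    # Reflect every amplitude about the mean ("inversion about the average")
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

state = diffusion(oracle(state))            # one Grover iteration
probs = [round(a * a, 6) for a in state]
# For N = 4, a single iteration drives the marked state's probability to 1.0
```

For larger search spaces the algorithm needs roughly √N iterations, which is exactly the quadratic speedup that threatens brute-force-resistant constructions such as symmetric key search.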

NIST (National Institute of Standards and Technology) is already in the process of standardizing cryptographic algorithms resistant to quantum attacks, and organizations such as Cloudflare and Google are experimenting with their implementation. For developers working with systems with long life expectancies, awareness of the impending cryptographic transition and migration planning is becoming an important aspect of responsible craftsmanship.

Low-code and no-code technologies are evolving from niche prototyping tools into full-fledged developer platforms for specific domains. Gartner predicts that by 2025, 70% of new business applications will be developed using these technologies. For traditional developers, this transformation may seem like a threat, but a closer look suggests a change in the nature of work rather than its elimination.

Instead of seeing low-code platforms as competition, craftsmen can view them as tools to increase productivity and expand capabilities. Spotify uses an in-house low-code platform to create dashboards and simple internal tools, freeing up developers for more complex tasks. At the same time, these platforms create new opportunities for developers in areas such as (1) extending platform capabilities by creating custom components; (2) integrating with existing systems; (3) implementing complex business logic beyond the capabilities of graphical interfaces.

Natural language processing (NLP) and large language models (LLM) technologies are fundamentally changing human-computer interaction, creating a new layer of abstraction between user intent and code. Programmers who can effectively design, train and fine-tune language models for specific applications have a unique opportunity to shape this transformation. GitHub Copilot and similar tools are just the beginning of a trend toward increasingly sophisticated “AI-pair programmers” and developer assistants.

A valuable approach for developers is to experiment with these technologies - not only as end users, but also as developers of extensions and integrations. Understanding the mechanisms of LLM (tokenization, prompt engineering, fine-tuning) and their limitations allows you to use these tools more effectively and identify niches where they can bring the most value.

**Strategic development areas for future Software Craftsmen**

  • Theoretical foundations of computer science that will survive technological change

  • Distributed systems architecture and complexity management

  • Design methodologies and business domain modeling techniques

  • Advanced software testing and verification methods

  • Fundamentals of artificial intelligence and machine learning

  • Communication skills and collaboration in cross-functional teams

  • Technology ethics and awareness of the social impact of software

  • Security and privacy in system design

  • Sustainability and energy-efficient code

  • Adaptive learning and personal knowledge management

How do you measure and communicate the value of craftsmanship to business customers?

Quantifying the long-term benefits of high quality code is a fundamental challenge in communicating the value of craftsmanship. Unlike traditional project metrics (time, budget, scope), technical quality is more difficult to measure and often has benefits that are spread out over time. ThoughtWorks introduced the concept of the “cost of change curve,” a tool that visualizes how the cost of changes to a system grows exponentially over time for low-quality code, while remaining relatively flat for well-designed code. This visualization, backed by concrete data from previous projects, proved more convincing to business decision-makers than abstract discussions of “clean code.”

For mid-level developers (3-7 years of experience), an effective strategy is to build a personal “library of business cases” - a collection of documented examples where technical quality has directly translated into tangible business benefits. These case studies, backed by concrete numbers and results, become a powerful tool in discussions with business stakeholders. For example, the Shopify team documents for each major refactoring project: (1) initial status and issues; (2) resources invested; (3) measurable results at 3, 6 and 12 months. This systematic documentation creates an evidence base of the value of the investment in quality.

A key challenge in business communication remains translating technical concepts into the language of business benefits. Instead of talking about “technical debt,” which remains an abstract concept for many managers, more effective craftsmen use business analogies and concrete consequences. Comparing technical debt to financial debt, with interest paid in the form of increased time to implement changes and higher risk of failure, proves much more comprehensible to non-technical decision makers.

The strategic approach is also to reframe the discussion from the binary opposition of “fast but low quality vs. slow but high quality” to a continuum of possible choices with different risk-benefit profiles. Rather than fighting for “perfect code” in every case, the product team at Atlassian has worked with the business to develop differentiated quality standards for different system components - from critical (where quality is a priority) to experimental (where speed of validation of business hypotheses is more important). This nuanced perspective, recognizing different business contexts, builds trust and partnerships instead of antagonistic clashes between “quality” and “deadlines.”

It becomes crucial for developers aspiring to leadership roles to develop “business bilingualism” - the ability to switch seamlessly between technical and business ways of thinking and communicating. A concrete practice to support the development of this competency is to regularly attend business meetings (sales, marketing, strategy) with no direct connection to IT, and to actively study the financial and business aspects of the organization. This bilingualism makes it possible to identify and articulate points where the technical aspects of craftsmanship directly support business goals.

Successful craftsmen are gradually evolving from a reactive defense of technical quality to a proactive presentation of it as a strategic competitive advantage. One example is the approach of Netflix, which actively promotes its culture of engineering excellence as part of building customer trust and market advantage. Rather than treating technical quality as an internal matter for IT departments, the organization is positioning it as an integral part of the customer value proposition - reliability, speed of innovation and adaptability to changing user needs are directly linked to the quality of the underlying code.

Educating customers to recognize the warning signs of poor technical quality can be just as important as presenting the benefits of craftsmanship. Boiling Frog syndrome - gradual habituation to a deteriorating situation - afflicts many organizations that do not notice growing technical problems until they reach a critical point. Craftsmen can play a key role in creating awareness of these “red flags” - such as the steadily increasing time to implement change, the growing number of regressions, or the difficulty of retaining a team. This educational role goes beyond the technical aspects of programming into the realm of business consulting.

Effective strategies for communicating the value of craftsmanship

  • Using the “cost of change curve” model to visualize the long-term costs of poor quality

  • Documenting specific business cases with measurable results of a qualitative approach

  • Translating technical concepts into the language of business benefits and risks

  • Differentiating quality standards according to the business criticality of components

  • Educating customers to recognize warning signs of growing technical problems

  • Positioning technical quality as a strategic competitive advantage rather than an internal IT matter

Is it specialization or broad competence - which will win in the labor market in 2025?

The T-shaped skills model - deep specialization in a selected area combined with a broad understanding of related technologies - is evolving into a comb-shaped profile, where professionals develop several areas of deeper knowledge combined with a basic understanding of the broader ecosystem. This transformation reflects the increasing complexity and interdisciplinarity of today’s information systems, where effective solutions often require combining knowledge from different domains.

An example of such a competency profile is the architect at Shopify, who combines deep specialization in: (1) distributed systems architecture; (2) e-commerce domain modeling; (3) web application security - with a broader understanding of front-ends, data analytics and cloud infrastructure. This combination enables him to effectively design systems at the intersection of these domains, identify non-obvious trade-offs, and communicate with various specialists. Importantly, these areas of deeper knowledge were not developed simultaneously - they evolved gradually in response to changing organizational needs and personal interests.

The challenge for many professionals remains finding a balance between strategically deepening their knowledge and reactively adapting to short-term market trends. LinkedIn Learning Report 2023 indicates that as many as 65% of developers feel pressure to continually expand their scope of competence, often at the expense of deepening it. This pressure can lead to “imposter syndrome” and paradoxically reduce the actual market value that comes from a unique combination of deeper knowledge rather than superficial knowledge of multiple technologies.

For developers in the early stages of their careers (0-3 years of experience), the priority should be to build the first “tooth” in the comb profile - an area of deeper specialization that will become the basis of professional identity and the starting point for future evolution. The choice of this initial specialization should take into account three factors: (1) personal interests and aptitude; (2) current market demand; (3) long-term prospects for the technology/domain in question. For example, specialization in backend development using languages with strong static typing (Java, C#, TypeScript) offers a good combination of current demand and long-term value.

In the context of a rapidly changing technology landscape, the ability to “pivot” - to strategically change the direction of specialization in response to market and technology trends - becomes crucial. As an example, a programmer initially specializing in PHP/WordPress, recognizing the long-term limitations of this path, systematically shifted his specialization toward JavaScript/React, taking advantage of the partial overlap between these technologies (webdev) for a seamless transition. This adaptability requires constant monitoring of trends and a willingness to strategically reinvest in new areas, even as the current specialization continues to pay dividends.

Adding domain expertise to a technical profile significantly increases the market value of developers, especially at the intermediate and advanced levels. Knowing the specifics of an industry (fintech, healthcare, e-commerce) and having a deep understanding of the business problems that software solves allows them to create solutions with higher business value and communicate more effectively with non-technical stakeholders. According to a McKinsey report, IT professionals with strong domain expertise generate 35% more business value on average than those with comparable technical skills but no business context.

For advanced programmers (7+ years of experience), a valuable strategy is to consciously develop a unique “constellation of competencies” - a combination of technical, domain and soft skills that sets them apart in the marketplace and creates hard-to-replace value. Instead of competing in crowd-sourced niches of popular technologies, experienced craftsmen can identify and occupy interdisciplinary spaces at the intersection of different areas - such as specializing in the implementation of regulatory-compliant systems in the financial sector, combining technical, legal and business competencies.

How to prepare for the challenges of edge computing and distributed systems?

Edge-first design requires a fundamental change in architectural approach - from a model of centralization and unlimited resources to one that takes into account edge constraints: unstable connectivity, limited computing power and memory, autonomous operations. Tesco learned a painful lesson in its store systems modernization project when the initially designed system assumed a fixed connection between cash registers and headquarters - leading to paralysis of operations with connectivity problems. The redesigned system adopts a resilient edge architecture, where terminals can operate autonomously indefinitely, synchronizing data when connectivity is available.
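The autonomous-operation pattern described above can be sketched as an offline-first write queue. This is a minimal illustration, assuming a hypothetical injected `send` transport that raises `ConnectionError` when the link is down: writes always succeed locally and are flushed opportunistically when connectivity returns:

```python
# A minimal offline-first write queue: local writes never fail, and pending
# records are flushed when the (hypothetical) transport becomes available.
import collections
import json

class EdgeWriteQueue:
    def __init__(self, send):
        self.send = send                    # injected transport; may fail
        self.pending = collections.deque()  # durable queue in a real system

    def write(self, record):
        # Appending locally always succeeds, even fully offline
        self.pending.append(json.dumps(record))

    def flush(self):
        # Drain as much as possible; stop quietly on connectivity loss
        sent = 0
        while self.pending:
            try:
                self.send(self.pending[0])
            except ConnectionError:
                break                       # keep the record for the next flush
            self.pending.popleft()
            sent += 1
        return sent

# Simulated flaky link: the first send attempt fails, later ones succeed
calls = {"n": 0}
def send(payload):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("link down")

q = EdgeWriteQueue(send)
q.write({"sale": 1}); q.write({"sale": 2})
first = q.flush()   # link down: nothing sent, records retained locally
second = q.flush()  # link restored: both records flushed
```

A production version would persist the queue to disk and deduplicate on the server side, but the core principle - never block local operations on remote availability - is the same.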

A key technical challenge is data management in edge systems. Traditional database approaches, based on strong consistency and central management, fail in a distributed environment. Techniques such as CRDT (Conflict-free Replicated Data Types), event sourcing and causally consistent replication allow building systems that are resilient to connectivity issues and data conflicts. An example of a practical application is the M-Pesa mobile payment system, which enables financial transactions even in regions with limited telecommunications infrastructure thanks to advanced replication and conflict resolution mechanisms.
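To illustrate the CRDT idea, here is a minimal grow-only counter (G-Counter), one of the simplest CRDTs - each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge regardless of message order. This is a sketch, not a production implementation:

```python
# G-Counter: a grow-only counter CRDT. Merge is commutative, associative
# and idempotent, so replicas converge no matter how syncs are ordered.
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> that replica's local increments

    def increment(self, n=1):
        # A replica only ever touches its own slot
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def merge(self, other):
        # Element-wise max: applying the same merge twice changes nothing
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

    def value(self):
        return sum(self.counts.values())

# Two replicas diverge while offline, then sync in either order
a, b = GCounter("a"), GCounter("b")
a.increment(3)
b.increment(2)
a.merge(b)
b.merge(a)
# Both now report the same total: 3 + 2 = 5
```

Counters like this are the building blocks for richer conflict-free types (sets, maps, sequences) used in collaborative editors and offline-capable mobile apps.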

For developers at an intermediate level (3-7 years of experience), a valuable first step in exploring distributed systems is to understand the theoretical foundations and limitations - such as the CAP theorem (Consistency, Availability and Partition tolerance cannot all be provided simultaneously) or the FLP impossibility result (in an asynchronous distributed system, consensus cannot be guaranteed if even one process can fail). These fundamental limitations explain why the design of distributed systems requires conscious compromises rather than the search for perfect solutions.

A concrete strategy for building competence in this area is “edge-ification” of existing projects - refactoring centrally designed applications with distributed scenarios in mind. This process often reveals hidden assumptions about resource availability and connectivity that can become points of failure in real-world deployments. The Salesforce team uses a technique called “Chaos Engineering for edge” - systematically introducing connectivity issues, delays and component failures in a test environment to identify and address system weaknesses.
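The fault-injection idea can be sketched as a toy wrapper - the names and failure model are illustrative, not Salesforce's actual tooling - that randomly fails calls passing through it, forcing callers to exercise their retry paths:

```python
# Toy chaos-engineering proxy: injects latency and random failures in front
# of any callable, so resilience logic gets exercised in tests.
import random
import time

class ChaoticProxy:
    def __init__(self, func, failure_rate=0.3, max_delay=0.0, seed=None):
        self.func = func
        self.failure_rate = failure_rate
        self.max_delay = max_delay
        self.rng = random.Random(seed)  # seeded for reproducible chaos

    def __call__(self, *args, **kwargs):
        if self.max_delay:
            time.sleep(self.rng.uniform(0, self.max_delay))  # injected latency
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected network partition")
        return self.func(*args, **kwargs)

def fetch_price(sku):
    # Stand-in for a real remote service call
    return {"sku": sku, "price": 9.99}

flaky = ChaoticProxy(fetch_price, failure_rate=0.5, seed=42)

def fetch_with_retry(sku, attempts=5):
    # The resilience logic under test: retry on injected partitions
    for _ in range(attempts):
        try:
            return flaky(sku)
        except ConnectionError:
            continue
    raise RuntimeError("service unavailable after retries")

order = fetch_with_retry("A1")
```

Production chaos tools (e.g., Netflix's Chaos Monkey) apply the same principle at the infrastructure level - terminating instances and partitioning networks - rather than wrapping individual function calls.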

Security in the context of edge computing presents a multidimensional challenge - edge devices often operate in physically accessible, unsecured locations, which introduces additional attack vectors. Traditional approaches based on central control and trusted enterprise networks fail in this context. Zero-trust architecture - where every interaction requires full authentication and authorization, regardless of the source - is becoming the standard in edge system design. This philosophy is used by JP Morgan Chase in its distributed banking system, where every device and transaction is treated as potentially untrusted, regardless of physical location.

For advanced programmers (7+ years of experience), it becomes valuable to explore advanced programming paradigms specifically tailored for distributed environments. Languages and frameworks like Elixir/Erlang (with its actor model and built-in fault tolerance), Rust (with its advanced type system for memory safety) or CRDTs implementations like Yjs (for concurrent data editing) offer powerful tools to address the challenges of edge computing. WhatsApp uses Erlang precisely because of its built-in fault tolerance and efficient management of multiple concurrent connections, allowing it to handle billions of messages a day with minimal infrastructure.

A significant organizational challenge is preparing development teams to work effectively with distributed systems. Traditional development practices, based on a local development environment and deterministic unit testing, are proving insufficient for systems whose emergent behavior results from the interaction of multiple distributed components. Organizations such as Netflix have developed specialized practices ranging from a “distributed development environment” (where developers work with miniature but realistic versions of distributed infrastructure), to “chaos engineering” (systematic introduction of failures for resiliency testing), to “systems thinking” training for developers.

Why will sustainability and green coding be elements of craftsmanship?

Code energy efficiency is becoming a measurable aspect of software quality not only for environmental reasons, but also for economic and practical reasons. Data centers currently account for about 1% of global electricity consumption, with projections rising to 3-8% by 2030. This trend, combined with rising energy costs and regulatory restrictions on CO2 emissions, creates a direct link between code efficiency and operational and compliance costs.

The challenge for many organizations remains quantifying and communicating these relationships. Google has introduced the concept of “software carbon intensity” (SCI), a metric that links energy consumption, resource efficiency and the cleanliness of the energy sources powering the infrastructure. Their internal tools allow developers to directly measure the impact of code changes on SCI, translating the abstract idea of green coding into concrete, measurable results. This methodology is gradually penetrating the broader community - the Green Software Foundation, for example, is working to standardize similar metrics for the industry.
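The SCI formula published by the Green Software Foundation is SCI = ((E × I) + M) / R, where E is energy consumed, I the carbon intensity of the grid, M the embodied emissions attributed to the workload, and R the functional unit (e.g., per API request). A minimal calculation sketch, with all numbers invented:

```python
# Software Carbon Intensity (SCI) per the Green Software Foundation formula:
#   SCI = ((E * I) + M) / R
# E = energy consumed (kWh), I = grid carbon intensity (gCO2e/kWh),
# M = embodied emissions attributed to the workload (gCO2e),
# R = functional unit (here: number of API requests). Values are made up.
def sci(energy_kwh, intensity_g_per_kwh, embodied_g, functional_units):
    """Grams of CO2-equivalent per functional unit."""
    return (energy_kwh * intensity_g_per_kwh + embodied_g) / functional_units

per_request = sci(energy_kwh=12.0, intensity_g_per_kwh=400.0,
                  embodied_g=800.0, functional_units=1_000_000)
# (12 * 400 + 800) / 1_000_000 = 0.0056 g CO2e per request
```

Because R normalizes by useful work, the metric rewards genuine efficiency gains rather than simply doing less, which is why it is gaining traction as a per-feature engineering metric.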

For early and mid-level developers, developing energy awareness and resource optimization skills is becoming a valuable competency. Specific techniques, such as auditing the energy efficiency of code, optimizing algorithms to minimize CPU/GPU operations, managing memory efficiently or minimizing network transfers, have a direct impact on an application’s carbon footprint. Etsy conducted a comprehensive audit of its e-commerce platform, identifying that inefficient image management and redundant API calls were responsible for more than 45% of unnecessary energy consumption. Optimizing these elements not only reduced the carbon footprint, but also improved the user experience by loading pages faster.

The challenge accompanying green coding is finding a balance between energy efficiency and other aspects of quality, such as code readability, maintainability and scalability. In some cases, energy optimizations can lead to more complex, more difficult to maintain code. A mature approach to green coding involves consciously analyzing these trade-offs and strategically selecting areas of optimization - focusing on those that offer the highest ratio of energy reduction to increased complexity.

Extending the life cycle of systems by creating adaptable software is the second pillar of a sustainable approach. Frequent replacement of systems generates not only electronic waste, but also significant emissions associated with the creation of new software and hardware. Techniques such as modularization, variability abstraction and event-driven architecture allow systems to evolve gradually instead of being replaced entirely. IKEA, after a costly and risky total replacement of its e-commerce system in 2015, adopted an evolutionary architecture strategy for subsequent iterations - building a system that can be updated component by component, without the need for a complete rewrite every few years.

For advanced developers and architects, a valuable perspective is a systems approach to sustainability that considers the full lifecycle of a digital product - from design, development and deployment, to maintenance and end-of-life. This holistic perspective requires interdisciplinary knowledge beyond traditional programming competencies - combining an understanding of IT infrastructure, energy management, environmental regulation and social responsibility.

Standardization and certification in the area of green coding is gradually gaining ground. ISO 14001 (Environmental Management Systems) is also increasingly being applied to software development processes, and new standards such as ISO/IEC 23001 (Green IT) are under development. For organizations operating in Europe, the upcoming CSRD (Corporate Sustainability Reporting Directive) will make it mandatory to report on the environmental impact of operations, including emissions associated with digital products and services. This regulatory evolution is creating demand for developers who understand the technical aspects of software sustainability.

The challenge for many organizations remains balancing short-term business goals with long-term sustainability. Particularly in sectors with high competitive pressures and rapid release cycles, investing in energy efficiency or designing for longevity can be seen as a luxury. Some organizations are addressing this dilemma by integrating aspects of sustainability into existing processes and metrics - for example, adding an “energy efficiency score” as a standard element of code review or incorporating energy metrics into production monitoring dashboards. This integration normalizes sustainable practices as a quality aspect rather than an optional extra.
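One way such an "energy efficiency score" might be wired into a review or CI pipeline is sketched below. The budget value, the function names and the use of CPU time as an energy proxy are all assumptions made for illustration, not a description of any particular organization's tooling:

```python
import time

def cpu_budget_check(fn, budget_seconds: float, *args) -> dict:
    """Hypothetical CI gate: flag a change when a hot path exceeds its
    CPU-time budget. CPU time is a crude but portable proxy for the
    energy the change will consume in production."""
    start = time.process_time()
    fn(*args)
    used = time.process_time() - start
    return {"cpu_seconds": used, "within_budget": used <= budget_seconds}

def hot_path(n: int) -> int:
    # Stand-in for a frequently executed production routine.
    return sum(i * i for i in range(n))

report = cpu_budget_check(hot_path, 0.5, 100_000)
assert report["within_budget"]
```

Failing the build, or merely annotating the review, when `within_budget` is false makes energy cost a routine quality signal rather than an optional extra.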

How do you build a portfolio of projects that will prove your mastery in 2025?

Open source projects with tangible social impact are a powerful part of a craftsman’s distinctive portfolio, but the key to their effectiveness is actual usability and adoption, not just technical features. Rather than creating yet another implementation of a blog or task management app, a valuable approach is to identify real, unmet needs in communities and nonprofit organizations. One example is Code for America, an organization that connects developers with public institutions and community organizations that need technology solutions but don’t have the resources to develop them commercially.

For early-career developers (0-3 years of experience), the challenge is to find projects that balance technical achievability with actual value. An effective strategy is to contribute to existing open source projects with an established user base, rather than starting your own projects from scratch. This method offers a dual benefit: (1) working on real, complex problems in an existing code base - a valuable skill in a professional context - and (2) visibility and references from the community, which carry more weight with potential employers than personal projects without users.

Documentation of the thought process and architectural decisions is becoming as important as the code itself, especially for developers aspiring to senior and leadership roles. GitHub, GitLab and similar platforms enable extensive documentation directly in repositories - from Architecture Decision Records, to detailed explanations of key components, to analyses of trade-offs and alternative approaches. This transparency of thought process demonstrates not only the end result, but also the engineering maturity behind the decisions.
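An Architecture Decision Record can be as lightweight as one short markdown file per decision, committed alongside the code. The sketch below follows the widely used Nygard-style template; the decision it describes is entirely hypothetical:

```
# ADR 007: Integrate checkout and invoicing via events

## Status
Accepted (2025-01-15)

## Context
Checkout currently calls the invoicing module directly, coupling their
release cycles and making invoicing impossible to replace in isolation.

## Decision
Checkout will publish an `order.placed` event; invoicing subscribes to
it. Neither component references the other directly.

## Consequences
+ Invoicing can be rewritten or replaced without touching checkout.
- Debugging spans an asynchronous boundary; tracing must be added.
```

Keeping such records in the repository lets a reviewer - or a future maintainer - see not just what was built, but which alternatives were weighed and why.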

For mid-level developers (3-7 years of experience), a valuable strategy is to develop “signature projects” - projects that become a recognizable part of their professional identity and demonstrate a unique skill or perspective. Instead of a scattered collection of unrelated projects, a concentrated series of related work creates a narrative of consistent development in a chosen area. For example, a developer specializing in JavaScript performance might develop a series of tools, libraries and articles focused on that topic, building a reputation as an expert in that niche.

A concrete technique for increasing project visibility is to combine code with educational content - from technical blogs explaining key concepts and decisions, to tutorials showing practical applications, to conference talks and webinars. This multi-channel presence not only increases the project’s reach, but also demonstrates effective technical communication - a valued competency at higher career levels. Auth0, originally a personal open source project, gained popularity precisely because of the extensive educational materials that facilitated adoption and built a community around the solution.

For advanced developers (7+ years of experience), projects that demonstrate the ability to solve complex system problems beyond mere implementation become a valuable part of the portfolio. Instead of focusing solely on code, these projects can include broader aspects: architecture for complex business domains, designing APIs and ecosystems for third-party developers, managing advanced trade-offs between functional and non-functional requirements. Such projects can be libraries, frameworks or platforms that solve difficult problems in an elegant, well-documented way.

The challenge for many craftsmen is finding time to develop meaningful portfolio projects alongside regular professional work and other commitments. A practical approach is to strategically combine professional and personal goals - identifying synergies between projects at work and developing a public portfolio. For example, craftsmen can negotiate with employers to open-source certain components or internal tools, which creates a win-win situation: the organization gains the benefits of open development (external contributors, reputation), and the developer can publicly demonstrate their skills.

Strategies for building a distinctive portfolio

  • Choosing projects with real impact and users instead of more implementations of standard applications

  • Combining code with educational content (blogs, tutorials, speeches) demonstrating technical communication capabilities

  • Documenting the thought process and architectural decisions, not just the final code

  • Developing a series of related projects in a selected niche instead of scattered, unrelated work

  • Strategically combining professional projects with public portfolio development through selective open-sourcing

  • Focusing on demonstrating a unique perspective and approach, not just technical skills

Why will mentoring and community remain the pillars of the software craftsmanship movement?

Imparting tacit knowledge, which is not easily codifiable in courses or documentation, remains a fundamental value of mentoring. This type of knowledge includes subtle decision-making heuristics, pattern recognition in complex situations, or an intuitive sense of appropriate design trade-offs - competencies that are essential for effective craftsmanship, but extremely difficult to acquire through formal instruction or self-study alone.

The challenge for many organizations is to scale effective mentoring beyond one-on-one relationships, which are valuable but resource-limited. Shopify is experimenting with a model called “mentoring circles” - small groups (4-6 people) at similar career levels, supported by a single senior mentor. These circles combine the advantages of individual mentoring with the dynamics of peer learning, creating a space where learning occurs not only within the mentor-mentee relationship, but also between participants. An additional advantage is the building of community within the organization, which supports the retention of key talent.

For early-career developers (0-3 years of experience), actively seeking mentorship beyond formal programs is a key factor in accelerated development. A practical approach is to develop a “personal board of advisors” - a network of mentors who specialize in different areas (technical, career, soft skills) - rather than searching for one “perfect mentor.” This diversified network provides more comprehensive support and a broader perspective, addressing different development needs.

Participation in communities of practice, both local and global, provides exposure to diverse perspectives and approaches that are difficult to obtain within a single organization. These communities serve as a space for the emergent evolution of standards and collaborative solutions to new challenges that predate formal publications and courses. An example is the Rust community, which developed advanced memory management and concurrency patterns before they were codified in formal documentation or literature.

The challenge for many professionals is finding the right communities, especially in niche specialties or smaller geographic centers. The pandemic accelerated the development of virtual communities that transcend geographic barriers, but maintaining engagement and building deep relationships in a virtual space requires a different approach. Stack Overflow Engineering has developed a “distributed communities of practice” model that combines asynchronous communication channels (dedicated Slack channels, forums) with regular synchronous meetings (virtual pair programming, lightning talks) and periodic in-person meetings when possible.

For mid-level developers (3-7 years of experience), a valuable evolution is the gradual transition from the role of knowledge recipient to active contributor to the community. This transformation not only deepens one’s own understanding (teaching is one of the most effective forms of learning), but also builds an expert reputation and professional network. Specific forms of this contribution can include leading local meetups, conference speaking engagements, publishing technical articles or mentoring juniors - each of which develops slightly different aspects of professional craftsmanship.

Emotional and psychological support in the face of the complexity of modern software development is becoming the third key aspect of communities of practice. Unlike purely technical knowledge exchange, this dimension of community addresses challenges such as imposter syndrome, professional burnout and stress management in a dynamic technological environment. A particularly valuable element of mature communities is the normalization of these experiences - the realization that even the most experienced professionals face similar psychological challenges.

Thoughtworks, a consulting firm known for its commitment to the Craftsmanship movement, systematically supports this layer through “vulnerability circles” - safe spaces where technologists can openly discuss their challenges, uncertainties and failures. These circles, initially controversial in a success-oriented tech culture, have proven to be a key element in building resilience and preventing burnout, especially among seasoned professionals who often struggle with hidden doubts and the pressure to maintain an expert image.

For advanced developers (7+ years of experience), consciously building and shaping a community becomes a form of technical leadership that transcends the formal organizational hierarchy. This “community steward” role requires not only technical expertise, but also skills in facilitation, conflict resolution, talent identification and development. In a dynamic IT environment, where formal organizational structures often fail to keep pace with evolving technologies and practices, these informal communities of practice become key nodes for knowledge transfer and innovation.