If you are leading, part of, or just considering an EA practice, the maturity model is intended to help you assess what's in place and identify next steps. Over the next year we'll be linking in more contributions and experiences from the Itana community.
What makes this model specific to higher education? We had many lively discussions in our working group and learned a lot about each other's institutions and EA teams. I think our most fundamental observation was that the range of EA opportunities, contexts, and teams in higher education is extremely broad. This is reflected in the model in a number of ways.
First, we agreed that we could not select an optimal scope and goals for EA that would apply to even a majority of institutions. So we made the need to define Scope an attribute of maturity, but we left open the actual scope of each EA practice. (To help practitioners think about possible scopes, I drafted a supplemental guide to Scoping the EA Practice.)
This might seem strange at first, but I think it can help organizations overcome a major stumbling block for new EA practices: the grand scope of generic definitions of EA. These tend to portray EA as inherently working across the whole enterprise and including technical, data, and business architecture. From what I've seen, EA practices in higher education that try to embrace this idealized scope set unrealistic expectations, fall short, and frustrate themselves and their stakeholders. (Even more so because in higher education, EA is usually a collaborative undertaking without enterprise-level sponsorship.)
Second, we designed the maturity levels to acknowledge that an EA practice in higher education can take a long, winding path. Our level 1 of maturity is Initiating, and unlike in some maturity models, this level isn't inherently bad. It can take years to get an EA practice formed, and useful work can be done by would-be EAs during this time. (Along the same lines, our level 5 of maturity is an open-ended Improving, which could look different at each institution.)
Third, we included maturity attributes that might reasonably be taken for granted in other industries but are easy to overlook in higher education. To briefly point these out (there's a rough sketch of the model's structure after the list):
- The Engagement attribute describes how the EA practice engages stakeholders, because, as already noted, EA in higher education is usually built on influence, not authority.
- The Impact Assessment attribute describes how the EA practice measures its performance, because performance is often left unmeasured in higher education.
- The Delivery attribute describes the means by which the EA practice delivers value.
- The Management attribute describes how the EA practice manages itself, because effective planning and management of a program is a challenge in higher education.
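For readers who like to see structure as code, here's a minimal sketch of what a self-assessment against the model might look like. This is my own illustration, not part of the model itself: only levels 1 (Initiating) and 5 (Improving) are named in this post, so the intermediate level names are placeholders, and the attribute list covers just the attributes mentioned here.

```python
from dataclasses import dataclass, field

# Levels as discussed in this post: only level 1 (Initiating) and
# level 5 (Improving) are named here, so levels 2-4 are placeholders.
LEVELS = {1: "Initiating", 2: "unnamed", 3: "unnamed",
          4: "unnamed", 5: "Improving"}

# Attributes mentioned in this post; the full model may define more.
ATTRIBUTES = ["Scope", "Engagement", "Impact Assessment",
              "Delivery", "Management"]

@dataclass
class Assessment:
    """A self-assessment: one maturity level (1-5) per attribute."""
    ratings: dict = field(default_factory=dict)

    def rate(self, attribute: str, level: int) -> None:
        if attribute not in ATTRIBUTES:
            raise ValueError(f"unknown attribute: {attribute!r}")
        if level not in LEVELS:
            raise ValueError("level must be 1-5")
        self.ratings[attribute] = level

    def report(self) -> str:
        lines = []
        for attr in ATTRIBUTES:
            level = self.ratings.get(attr)
            label = f"level {level} ({LEVELS[level]})" if level else "not yet assessed"
            lines.append(f"{attr}: {label}")
        return "\n".join(lines)

# Example: a young practice that has defined its scope but is still
# forming its stakeholder engagement.
a = Assessment()
a.rate("Scope", 2)
a.rate("Engagement", 1)
print(a.report())
```

The point of the sketch is simply that the model is a grid of attributes and levels, and that "not yet assessed" is a perfectly legitimate state for a young practice.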
I suspect an observer from outside higher education might look at this maturity model and ask: "What good is this? You've left the model without even a common definition of EA, and instead spent most of your time on stuff that's true for any program. Besides, what kind of organization wouldn't define scope and measure value? That's all self-evident."
To which I would respond: Exactly! Given how EA typically arises in higher education (bottom-up, situated in IT, without broad sponsorship, built through earned influence), and given typical operating models in higher education institutions (relatively decentralized and informally managed). Given all that, this model helps EA practices define what they do and focus on what will help them be more effective.
Of course, it's just our first version and we'll be listening closely as practitioners apply the model and share their experiences. Looking forward to the conversation!