The 6 Completely Predictable Reasons Your Traditional PLM Is Failing You

Traditional PLM is a sinking ship

Summary: Traditional PLM systems are large, difficult projects that are prone to failure. A cloud PLM system, however, can mitigate many of the challenges of PLM implementation with an inexpensive, standardized approach that offers out-of-the-box functionality from day one.


Projects fail. Everyone knows this. But what you might not know is that traditional PLM projects fail a lot more often than most.

Traditional PLM solutions are complex software projects. They involve lots of stakeholders, lots of customization, and usually require an on-prem deployment. Implementation fees are high, and long project timelines are the norm.

But that doesn’t mean that companies just have to accept that they’re going to shell out $10 million for a system that might work (eventually).

We’ve dug into why traditional, on-prem PLMs fail so often and so spectacularly (plus why a cloud PLM system might be the solution).

1. Over-customization

Traditional PLM vendors are all about customization. And on the face of it, this is great! After all, no one wants to be one of a hundred colours in a box.

But the truth is, over-customization in PLM projects creates a world of problems.

Difficult and time-consuming to deploy

Deployment takes longer than it should. Instead of the PLM vendor doing 90% of the coding once and the remaining 10% for each new client, it's the other way around: 10% of the coding happens once, and 90% is redone for every new client.

There are no economies of scale, so costs are high and project timelines are long.

Difficult to update

Because of hard-coded custom integrations, traditional PLM solutions have ongoing maintenance problems. On-premise deployments (more on those in a minute) and solutions that are basically hacked together make updates difficult. Fixing one thing breaks everything else.

Not future-proofed

Technology is always changing. But as soon as an integration is hard-coded for an organization, it's like setting that integration in cement. For instance, let's say a manufacturer uses an ERP, and their PLM needs to integrate with it.

“No problem!” says the PLM provider. “Klaus our coding pro can whip up an integration and deploy it in just two months!”

But what happens when that organization wants to switch their ERP? They have two options:

  1. Call Klaus the Coder and get him to come back, untangle his code, and build a new integration for the new ERP, or…
  2. Don’t switch the ERP at all.

Neither of these is ideal.
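To make the "set in cement" problem concrete, here's a minimal, purely hypothetical sketch; the class and function names are invented for illustration and don't come from any real PLM or ERP product. It contrasts a hard-coded, Klaus-style integration with one that sits behind a small connector interface, which is what makes swapping the ERP a one-line change instead of a rewrite.

```python
# A hypothetical sketch only: none of these names come from a real PLM or ERP API.
from abc import ABC, abstractmethod


# The "Klaus" approach: a vendor-specific call baked directly into the PLM code.
# Swapping ERPs means finding and rewriting every call site like this one.
def push_bom_hardcoded(bom: dict) -> None:
    print(f"Uploading {len(bom)} BOM lines in Vendor X's proprietary format")


# The alternative: the PLM only depends on a small, stable connector interface,
# and each ERP gets its own adapter behind it.
class ErpConnector(ABC):
    @abstractmethod
    def push_bom(self, bom: dict) -> None: ...


class VendorXConnector(ErpConnector):
    def push_bom(self, bom: dict) -> None:
        print(f"Sending {len(bom)} BOM lines to Vendor X")


class VendorYConnector(ErpConnector):
    def push_bom(self, bom: dict) -> None:
        print(f"Sending {len(bom)} BOM lines to Vendor Y")


def push_bom(connector: ErpConnector, bom: dict) -> None:
    # The PLM side never changes, no matter which ERP sits behind the connector.
    connector.push_bom(bom)


if __name__ == "__main__":
    bom = {"part-001": 4, "part-002": 12}
    push_bom(VendorXConnector(), bom)  # switching ERPs is a one-line change...
    push_bom(VendorYConnector(), bom)  # ...not a call to Klaus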

2. On-prem implementation

On-premise or on-prem implementation is the weapon of choice for major PLM providers.

The reason, ostensibly, is that on-prem solutions are more secure. After all, on-prem solutions involve getting a server and putting it deep in the basement of your head office:

  • You get control over the physical environment
  • Your stuff is the only stuff on the server, so you control who (or what!) it connects to
  • You have control over your network

But there’s a catch.

First, you need a server expert to maintain it, and server experts are difficult to find and expensive to keep.

Second, it makes it extremely difficult to scale. If your organization grows and needs more PLM processing power, you have to physically add more servers.

But mostly, on-prem solutions add enormous risk to a project. The organization is responsible for more of the work associated with implementation, which means there’s more opportunity for something to go wrong.

3. Traditional PLM solutions suffer scope creep

Traditional PLM programs always start out so full of optimism. Users and C-suite executives sit around and dream up all the things they want their PLM system to do.

The construction starts, and someone says “oh but it’d be great if…”

BOOM. Scope creep.

Because of their complexity, their diverse user base, and an 'in for a penny, in for a pound' mentality, PLM projects are extremely prone to people saying 'oh, we can just add this in at the same time!'

And because the door to custom solutions is wide open, literally nothing is off the table.

In the end, traditional PLM systems become behemoths.

4. Improper integration

The value of a PLM isn’t in the product itself, but rather, in what it integrates with.

Integration with an organization's other key systems, like an ERP, CRM, and SCM, is critical.

We like to think that all our business tech is effectively ‘plug and play’ — you just come along with your new PLM software and plug it into everything else like you’re setting up a TV.

That's not the reality, though. These integrations are complex and difficult for a number of reasons.

First, organizations rely on all sorts of systems: some from different vendors, some they built themselves, and some they purchased and then customized beyond belief. It's not like you're building a single complicated plug; it's like you need to build dozens of complicated plugs.

Second, these systems emerged over time. Standards change and technology evolves. That means integration isn't just linking together two systems; it's often linking together two different generations of technology.

So the task is hard right out of the gate. But traditional PLM products are uniquely unsuited to it.

Their business model relies on heavy customization and consultation, so they’re not even incentivized to make integration easy. As a result, the PLMs themselves tend to be clunky.

PLM system providers will often have one answer to these problems: a full software suite. If you get the SAP PLM and ERP, you'll probably be able to integrate them easily. But that leaves you on the hook to stick with one vendor (even if the product isn't quite right), and it leaves organizations who use multiple suppliers, or who can't afford a $20,000 solution, out to dry.

5. Complex, unusable UIs

PLMs are traditionally used by engineers and designers only.

But the value of the data connected by a PLM extends far beyond those teams. Procurement, sales, and marketing, along with managers, administrators, and executives, all benefit from both seeing and contributing to the engineering and design process.

But traditional PLM software solutions fail to build user interfaces (UIs) that non-technical staff can actually understand.

This means organizations spend all that money on a PLM system but only extract maybe 10% of its total value.

6. Static workflows

Traditional PLM implementations go like this:

  1. PLM consultants meet with users, executives, and technical staff to understand what they want out of a PLM product.
  2. They learn the existing business processes, systems, and required integrations.
  3. They take their 'out of the box' system and custom-code all the different connections to systems and data structures.
  4. They review existing workflows and build in optimized PLM answers to make those workflows easier.
  5. They collect their cheque and leave.

But what happens when those workflows change? What happens as businesses evolve, their needs change, or they restructure their organization?

Suddenly, out-of-date workflows are effectively set in stone. They're difficult to change and persist long after they're neither relevant nor optimized. Inevitably, workflows end up being abandoned for shortcuts, and our good friends manual process and Excel creep back into how users actually do their jobs.

And just like that, we're back to where we started: time-consuming, manual, error-prone, and inefficient processes.
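To see why that rigidity bites, here's a rough sketch (with invented names, not any vendor's actual workflow engine) of the alternative: when a workflow is stored as data rather than hard-coded, changing the process is an edit to configuration, not a new coding project.

```python
from dataclasses import dataclass, field


# Hypothetical sketch: a release workflow defined as data, not hard-coded logic.
@dataclass
class Workflow:
    name: str
    stages: list = field(default_factory=list)

    def next_stage(self, current: str) -> str:
        """Return the stage after `current`, or `current` itself if it's the last one."""
        i = self.stages.index(current)
        return self.stages[min(i + 1, len(self.stages) - 1)]


# The process as captured at implementation time.
release = Workflow("design-release", ["draft", "engineering review", "approved", "released"])

# The business later restructures and adds a compliance gate. Because the stages
# live in data, this is a configuration change, not a re-implementation.
release.stages.insert(3, "compliance check")

print(release.next_stage("approved"))  # -> "compliance check"
```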

So how can a cloud PLM system help?

Cloud PLM system Fix One: Out-of-the-box functionality

Cloud PLM providers built on a software-as-a-service (SaaS) business model can't afford a long implementation cycle. They don't have legions of consultants on hand, and their dev teams are focused entirely on making the product better, not on coding one little integration.

The result is a lot less customization. It's more of a 'here's the solution, take it or leave it' approach.

Which actually solves a bunch of PLM problems:

  • Because the product has to work right out of the box, integrations with a range of business systems work from day one
  • Legacy data integration works from day one, too
  • The product is easy to update (since it's standardized) and easy to replace
  • Project scope is clearly defined from the start

Cloud PLM system Fix Two: Low-cost cloud hosting

There’s no comparing the cost of cloud-hosted products and on-prem. No matter how you slice it, cloud is a cheaper, faster, and cleaner solution.

You effectively outsource your hosting to a pro. You don't need to hire a technical specialist, build a secure and controlled environment, or try to scale by spinning up new servers. Your cloud host does all that work for you. You never have to worry about downtime, because 99.99% uptime (that's less than an hour of downtime a year) is the standard for everyone now. And because you don't need to spend $10,000 on servers, you can reduce your implementation costs and time by about 95%.

Cloud PLM system Fix Three: Modular deployment

Cloud-hosted PLM systems have a modular structure: they're built so clients can deploy them piece by piece, building up a robust system over time. For clients, it means:

  • They can start with a basic PLM system, then add in additional plugins and integrations over time to better leverage their data.
  • Changes can be made without totally rebuilding the product.

Because systems can be easily changed and improved over time, there's far less incentive to do everything at once. For instance, a cloud PLM provider might offer both a PLM and an ERP solution. A client might buy both, but find the ERP solution isn't working for them. They can easily disconnect the ERP part and migrate to a different ERP provider while retaining the full PLM functionality, and without the sunk cost of an expensive implementation.
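Here's a minimal sketch of what that modularity might look like in code. The `PlmCore` class and the module names are invented purely for illustration: the point is that optional pieces attach to and detach from a core system without anything else being rebuilt.

```python
# Hypothetical sketch: a core system with optional, swappable modules.
class PlmCore:
    def __init__(self) -> None:
        self.modules: dict = {}

    def attach(self, name: str, module: object) -> None:
        self.modules[name] = module

    def detach(self, name: str) -> None:
        # Dropping one module leaves everything else in place.
        self.modules.pop(name, None)


plm = PlmCore()
plm.attach("change-management", "basic PLM module")   # start small...
plm.attach("erp-sync", "ERP connector, provider A")   # ...add integrations later

# The ERP piece isn't working out? Swap it without rebuilding the PLM itself.
plm.detach("erp-sync")
plm.attach("erp-sync", "ERP connector, provider B")

print(sorted(plm.modules))  # -> ['change-management', 'erp-sync']
```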

Project scope is kept in check, project timelines are preserved, and the product is flexible enough to change and adapt over time, insulating organizations against future changes across their tech stack.

Wrap up

Implementing a new PLM is never an easy task. It’s not like getting a new office chair — it’s complicated, it takes a clear understanding of objectives and requirements, and it needs to be an acquisition that can stand up over time.

But the problem is that traditional PLM implementations fail, and fail often.

Over-customization, on-prem implementation, scope creep, poor integration, unusable interfaces, and static workflows all contribute to their failure.

Major organizations abandon PLM projects after millions of dollars and years of work turn into huge, complex projects that just suck up resources with no end in sight.

But it doesn’t have to be that way.

Cloud PLM software lowers the barrier to entry by reducing implementation costs, reducing customization requirements, and actively combatting problems like scope creep and integration challenges that plague traditional implementations.


Image credit: Bernhard_Staerck via Pixabay

Published December 12, 2018