delivering a modern logistics information system one bit at a time.

Deployment lessons from the Army Logistics Modernization Program

Last week Larry Allen sent along a pointer to a GAO report on the Army's Logistics Modernization Program (LMP). That system is based on an SAP implementation. Just like the GAO reports we've seen on the Navy's ERP implementation, this one has lessons we can apply to CG-LIMS. I encourage everyone on the team to read the "Highlights," the one-page summary of their findings. Here are GAO's bottom-line recommendations:

In order to improve the third deployment of LMP, GAO is recommending that the Secretary of the Army direct the Commanding General, Army Materiel Command, to (1) improve testing activities to obtain reasonable assurance that the data used by LMP can support the LMP processes, (2) improve training for LMP users, and (3) establish performance metrics to enable the Army to assess whether the deployment sites are able to use LMP as intended. The Army concurred with our recommendations.

Below are what I think are the top five nuggets from the report that we can learn from:

Although the depots were able to continue to repair items and support the warfighter, LMP users had to rely on manual work-around processes, which are not part of how LMP is intended to function and hinder the Army's ability to realize the benefits expected from LMP….

The depots experienced data quality issues, despite improvements the Army made to address data quality issues experienced during the first deployment of LMP at Tobyhanna Army Depot, because the Army's testing strategy did not provide reasonable assurance that the data being used by LMP were accurate and reliable. Specifically, the Army's testing strategy focused on determining whether the software worked as designed, but did not assess whether LMP was capable of functioning in a depot environment using the actual data from the depots. Additionally, the Army's training strategy did not effectively provide users the skills necessary to perform all of their tasks in LMP. Users at the depots stated that the training they received before LMP became operational was not conducted in a realistic environment that showed them how to perform their expected duties….

In order to assess the functional readiness of the software, the Army used simulated test data to test the system. For example, when assessing the functional readiness of the software to perform an induction of an item for repair, Army officials told us that they did not attempt to induct an item for repair using the data loaded into LMP. Instead, the Army tested whether LMP could perform an induction, and performed these tests using simulated data so that developers would know whether LMP could provide the intended capability. While this approach is useful and desirable to determine whether the software can operate as expected, it does not assess whether the data are of sufficient quality to work in LMP….
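That distinction, proving the software can perform an induction versus proving it can do so with a depot's actual records, is worth baking into our own test planning. Below is a minimal sketch of the idea in Python; the record fields, validation rules, and sample data are hypothetical illustrations, not anything taken from LMP or CG-LIMS:

```python
# Hypothetical sketch: run the system's own validation rules over the
# actual legacy records slated for conversion, not just over clean,
# hand-built fixtures. Green on fixtures plus red on real data isolates
# a data-quality problem rather than a software defect.

from dataclasses import dataclass

@dataclass
class RepairRecord:
    item_id: str
    nsn: str              # National Stock Number (13 digits)
    quantity: int
    unit_price: float
    condition_code: str

# Rules the target system enforces when an item is inducted for repair.
# Each rule returns an error message, or None if the record passes.
RULES = [
    lambda r: "missing item_id" if not r.item_id else None,
    lambda r: "NSN must be 13 digits"
              if len(r.nsn) != 13 or not r.nsn.isdigit() else None,
    lambda r: "quantity must be positive" if r.quantity <= 0 else None,
    lambda r: "unit_price must be non-negative" if r.unit_price < 0 else None,
    lambda r: f"unknown condition code {r.condition_code!r}"
              if r.condition_code not in {"A", "B", "F", "H"} else None,
]

def validate(records):
    """Split records into (clean, failures); failures maps id -> errors."""
    clean, failures = [], {}
    for rec in records:
        errors = [msg for rule in RULES if (msg := rule(rec)) is not None]
        if errors:
            failures[rec.item_id or "<blank id>"] = errors
        else:
            clean.append(rec)
    return clean, failures

# A simulated fixture like this one will always pass, which is exactly
# why it says nothing about the quality of a depot's real data:
simulated = [RepairRecord("TOBY-0001", "1234567890123", 1, 2500.0, "F")]
clean, failures = validate(simulated)
print(f"simulated: {len(clean)} clean, {len(failures)} failing")

# The GAO finding, restated: before go-live, run the same rules over the
# actual converted extract (loaded by a real extract step in place of
# this fixture) and treat the failure rate as a readiness gate.
```

The rules stay identical across both runs; only the data source changes, which is what makes the comparison meaningful.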
Although the Army's training strategy was designed to provide LMP users the skills and knowledge to successfully perform their new roles, LMP users we interviewed at Corpus Christi and Letterkenny Army Depots stated that the training they received prior to LMP becoming operational did not fully meet their needs. The same users stated that the training focused on what LMP was supposed to do rather than on how they were to use the system to perform their day-to-day missions. Additionally, because the duties of some LMP users at the depots were changing, the training users received was not always commensurate with the responsibilities they were assigned. Consequently, some LMP users told us that they did not always understand the actions they needed to perform in order to accomplish their assigned tasks….

We also found that the LMP program management office's scorecard did not accurately reflect the internal assessment of LMP implementation at the depots. For example, Letterkenny Army Depot developed and used a scorecard to measure progress of LMP implementation, which included more than 50 processes that end users had to perform in LMP covering areas such as supply, maintenance, and finance. According to officials at Letterkenny Army Depot, a process was identified as "green" once the user had successfully performed the task in LMP using the envisioned processes. However, the progress as tracked by the depot did not match the progress as reported on the LMP program management office's scorecard. For example, on May 26, 2009, Letterkenny Army Depot had identified 48 of its processes as "red" because the depot either had not yet performed the function in LMP or was unable to perform it successfully in LMP using the envisioned processes. However, on the same day, the LMP program management office reported that LMP was "green" in all elements measured by the LMP program management office's scorecard. These differences reflect the lack of a comprehensive set of metrics for measuring the success of LMP implementation, because while the LMP program management office was measuring whether the software was working, Letterkenny Army Depot had identified that it was unable to conduct its daily operations using LMP as envisioned.

The short sketch at the end of this post reproduces that two-scorecards problem in miniature.

Don't read any of this as criticism of another program. This is hard. GAO reports provide another opportunity for us to learn lessons from the hard road other programs have traveled. If there's anything you want to make sure folks read, feel free to copy and paste the excerpt as a comment.
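Here is that miniature: two rollups over the same process list, one measuring whether the software function works and one measuring whether an end user completed the process as envisioned. Everything in it (process names, statuses, the rollup rule) is a hypothetical illustration, not data from the report:

```python
# Hypothetical sketch: two scorecards over the same processes disagree
# because they measure different things.

from dataclasses import dataclass

@dataclass
class ProcessStatus:
    name: str
    software_function_passes: bool      # what the program office tracked
    user_completed_as_envisioned: bool  # what the depot tracked

def rollup(processes, criterion):
    """Report 'green' only if every process meets the criterion."""
    return "green" if all(criterion(p) for p in processes) else "red"

processes = [
    ProcessStatus("induct item for repair", True, False),
    ProcessStatus("issue repair parts", True, False),
    ProcessStatus("post shop-floor labor", True, True),
    # ...imagine 50+ of these covering supply, maintenance, and finance
]

# Same day, same processes, opposite colors:
print("program office view:", rollup(processes, lambda p: p.software_function_passes))
print("depot view:", rollup(processes, lambda p: p.user_completed_as_envisioned))

# The fix GAO recommends amounts to tracking both columns: working
# software is necessary, but the site running its daily operations in
# the system is the outcome the metric has to capture.
```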