Field & LAB Proven | Interview-Ready | OSS-Driven
Nokia 5G KPI Optimization & Interview Preparation
PM Counters:
NRRC.ConnEstabSucc.Sum drops 30% in 2 hours
Example (Hourly):
06:00–07:00 → 12,500
07:00–08:00 → 12,100
08:00–09:00 → 8,400
09:00–10:00 → 8,200
Alarms:
N/A (no hardware alarms)
Correlation:
NNGAP.InitCtxtSetupFail.Sum increases with cause
“radioNetwork-resource-not-available”
Example:
Normal hour → 220
Degraded hour → 1,150
Hourly Trend:
Degradation starts at 08:00 daily
Check RRC failure breakdown
Counters analyzed (hourly):
NRRC.ConnEstabFail.Sum
Normal hour → 1,050
Degraded hour → 4,200
NRRC.ConnFail_Congestion.Sum
Normal hour → 320
Degraded hour → 3,150
NRRC.ConnFail_Radio.Sum
Normal hour → 410
Degraded hour → 450
NRRC.ConnFail_Terminal.Sum
Normal hour → 320
Degraded hour → 350
NPRACH.SuccTotal
Normal hour → 9,800
Degraded hour → 9,750
Observation:
Majority of RRC failures are from NRRC.ConnFail_Congestion.Sum
NRRC.ConnFail_Radio.Sum remains stable
NPRACH.SuccTotal remains stable
Conclusion:
RRC degradation is not due to radio conditions or PRACH failures.
Root cause points to control-plane congestion.
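The failure-cause ranking above can be reproduced in a few lines. A minimal sketch, using the degraded-hour counter values from this scenario (the dict layout is a hypothetical export format, not a Nokia tool output):

```python
# Rank RRC failure causes by their share of the degraded-hour total.
failures = {
    "NRRC.ConnFail_Congestion.Sum": 3150,
    "NRRC.ConnFail_Radio.Sum": 450,
    "NRRC.ConnFail_Terminal.Sum": 350,
}

total = sum(failures.values())
shares = {k: v / total for k, v in failures.items()}
dominant = max(shares, key=shares.get)

print(dominant, f"{shares[dominant]:.0%}")  # congestion carries ~80% of failures
```

With congestion at roughly 80% of all failures while radio and terminal causes stay near their normal-hour levels, the control-plane congestion conclusion follows directly.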
Analyze control plane resource utilization
Counters analyzed during 08:00–10:00:
NCCE.UtilDL.P95
Normal hour → 68%
Degraded hour → 92%
NPRACH.AttTotal
Normal hour → 10,400
Degraded hour → 14,800
NRRC.ConnRej.Sum
Normal hour → 480
Degraded hour → 2,900
NGAP.UECtxtRelReq.Sum
Normal hour → 310
Degraded hour → 1,780
Observation:
PDCCH CCE utilization crosses 90%
RRC rejections rise sharply
Core network context releases increase
Conclusion:
Control-plane resource saturation is confirmed during busy hours.
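A saturation check combining the two strongest signals above can be sketched as follows; the 90% CCE threshold and 3× reject-growth factor are illustrative assumptions, not Nokia defaults:

```python
# Flag control-plane saturation when PDCCH CCE utilization and
# RRC reject growth both cross thresholds (illustrative values).
def control_plane_saturated(cce_util_p95, rej_now, rej_baseline,
                            cce_limit=0.90, rej_growth=3.0):
    return cce_util_p95 >= cce_limit and rej_now >= rej_growth * rej_baseline

print(control_plane_saturated(0.92, 2900, 480))  # degraded hour -> True
print(control_plane_saturated(0.68, 480, 480))   # normal hour -> False
```

Requiring both conditions avoids false alarms from a single noisy counter.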
Check current mobility and access parameters
Parameters audited:
acBarringFactor
Current value → 0.95
rrcConnectionRejectWaitTimer
Current value → 1000 ms
maxConnectedUsers
Current value → 200
prachConfigurationIndex
Current value → 98
Audit window:
Last 7 days
Observation:
No parameter change detected
Values remained constant before degradation
Conclusion:
Issue is traffic-driven, not configuration-driven.
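The 7-day audit reduces to a snapshot diff. A minimal sketch, with the audited values from above (the snapshot dict format is hypothetical):

```python
# Confirm "no parameter drift" by diffing two configuration snapshots.
day1 = {"acBarringFactor": 0.95, "rrcConnectionRejectWaitTimer": 1000,
        "maxConnectedUsers": 200, "prachConfigurationIndex": 98}
day7 = dict(day1)  # audit showed values unchanged across the window

changed = {k: (day1[k], day7[k]) for k in day1 if day1[k] != day7[k]}
print("traffic-driven" if not changed else f"config-driven: {changed}")
```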
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| acBarringFactor | 0.95 | 0.65 | Reduce access attempts during congestion |
| rrcConnectionRejectWaitTimer | 1000 ms | 500 ms | Faster retry for rejected UEs |
| maxConnectedUsers | 200 | 180 | Protect existing connections |
| prachFreqOffset | 0 | 12 | Spread RACH attempts across resources |
| ssbPerRACHOccasion | 8 | 16 | Better beam correspondence for initial access |
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| RRC Setup Success Rate | 85.2% | 96.8% | +11.6% | Improved |
| RRC Reject Rate | 12.5% | 2.3% | −10.2% | Reduced |
| PDCCH CCE Utilization (P95) | 92% | 78% | −14% | Reduced |
| Average RRC Setup Time | 128 ms | 89 ms | −39 ms | Reduced |
| Initial Context Setup Failures | 8.2% | 1.1% | −7.1% | Reduced |
The sudden RRC Setup Success Rate degradation is caused by control-plane congestion during predictable busy hours.
By optimizing access control, retry timing, and RACH distribution, signaling load is stabilized and RRC performance is restored without hardware expansion.
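For reference, the RRC Setup Success Rate KPI is derived from the success and failure counters used in this scenario; the hourly samples below are the illustrative ones from the triggering data, not the table averages:

```python
# RRC Setup Success Rate = successes / (successes + failures).
def rrc_setup_sr(succ, fail):
    attempts = succ + fail
    return succ / attempts if attempts else 0.0

normal = rrc_setup_sr(12_500, 1_050)   # ~92% in a normal hour
degraded = rrc_setup_sr(8_400, 4_200)  # ~67% in a degraded hour
print(f"{normal:.1%} -> {degraded:.1%}")
```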
PM Counters:
NRLF.Detected.Sum spikes in beams 2, 5, 8
Example (Hourly):
Alarms:
No RF alarms, but beam-specific failures observed
Correlation:
High NBFI.Count.Sum in same beams
Example:
Pattern:
Occurs during specific hours: 18:00–22:00
Analyze beam failure patterns using the following counters:
Example observations (18:00–22:00):
Beam 2
Beam 5
Beam 8
Observation:
Conclusion:
High RLF is not caused by coverage loss but by beam instability.
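The beam-level correlation above can be sketched as a simple joint-threshold scan. Per-beam values are hypothetical (the source elides them); beams 2, 5, and 8 are the ones named in the scenario, and the thresholds are illustrative:

```python
# Flag beams whose RLF spikes coincide with beam-failure indications (BFI).
rlf = {1: 12, 2: 180, 3: 9, 5: 210, 8: 165}      # NRLF.Detected.Sum per beam
bfi = {1: 40, 2: 950, 3: 35, 5: 1100, 8: 870}    # NBFI.Count.Sum per beam

unstable = [b for b in rlf if rlf[b] > 100 and bfi[b] > 500]
print(sorted(unstable))  # [2, 5, 8]
```

Beams where RLF rises without BFI would instead point at coverage loss, which is exactly what this check rules out.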
Check beam management parameters for affected beams:
Parameters audited:
Observed configuration (Pre-Optimization):
Observation:
Conclusion:
Beam management configuration is not optimized for high-mobility or interference-prone scenarios.
Analyze inter-beam interference using:
Example observations:
Conclusion:
Significant inter-beam interference exists, leading to frequent beam failures and RLF.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| beamFailureRecoveryTimer | 100 ms | 50 ms | Faster beam recovery |
| beamFailureInstanceMaxCount | 5 | 3 | More sensitive beam failure detection |
| beamReportingPeriodicity | 160 ms | 80 ms | Faster beam reporting |
| ssbPeriodicity | 20 ms | 10 ms | More frequent beam sweeping |
| csiRsDensity | one | three | Denser CSI-RS for better beam management |
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| Beam Failure Rate | 15.2% | 3.8% | −11.4% | Reduced |
| RLF due to Beam Failure | 8.5% | 1.2% | −7.3% | Reduced |
| Beam Switch Delay | 45 ms | 22 ms | −23 ms | Reduced |
| Beam Measurement Accuracy | 78% | 92% | +14% | Improved |
| User Throughput (affected beams) | 65 Mbps | 142 Mbps | +77 Mbps | Improved |
The high RLF rate was caused by beam instability combined with inter-beam interference during peak hours.
By optimizing beam recovery timing, reporting periodicity, CSI-RS density, and sweeping frequency, beam robustness improved significantly, resulting in reduced RLF and enhanced user throughput.
PM Counters:
NTHP.UlMacCellVol drops 40% during 18:00–21:00
Example (Hourly UL Throughput):
Alarms:
No hardware alarms
Correlation:
High NULInterference.Avg, and NPUSCH.PowerHeadroom.Avg turns negative
Example:
Pattern:
Coincides with UL interference increase during peak hours
Analyze UL interference patterns using the following counters:
Example observations (hourly):
18:00–19:00
19:00–20:00
Observation:
Conclusion:
UL throughput degradation is driven by high uplink interference.
Check UL power control configuration parameters:
Parameters audited:
Observed configuration (Pre-Optimization):
Observation:
Conclusion:
UL power control configuration is not optimized for high-interference peak hours.
Analyze UL scheduler behavior using:
Example observations (18:00–21:00):
Observation:
Conclusion:
UL scheduler is stressed due to interference-driven retransmissions and power limitations.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| p0NominalPUSCH | −76 dBm | −70 dBm | Increase target power to overcome interference |
| alpha | 0.8 | 1.0 | Full path loss compensation |
| deltaMCS-Enabled | FALSE | TRUE | Enable MCS-based power adjustment |
| ulTargetBLER | 10% | 5% | More conservative MCS selection to cut retransmissions |
| srsPeriodicity | 20 ms | 40 ms | Reduce SRS overhead for more PUSCH |
| bsrTimer | 20 ms | 10 ms | Faster BSR reporting |
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| UL Throughput (Peak Hour) | 125 Mbps | 320 Mbps | +195 Mbps | Improved |
| UL PRB Utilization | 88% | 75% | −13% | Reduced |
| UL BLER | 15.2% | 6.8% | −8.4% | Reduced |
| PUSCH Tx Power Headroom | −2.5 dB | 3.8 dB | +6.3 dB | Improved |
| UL Interference | −92 dBm | −98 dBm | −6 dB | Improved |
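The effect of the p0NominalPUSCH and alpha changes can be worked through with the simplified open-loop PUSCH power formula (TS 38.213 §7.1 form, with the closed-loop and delta terms omitted). The 110 dB path loss and 10-PRB grant are illustrative assumptions:

```python
import math

# Simplified open-loop PUSCH power:
#   P = min(Pcmax, P0 + 10*log10(M_RB) + alpha * PL)
def pusch_power(p0_dbm, alpha, pathloss_db, n_prb, pcmax_dbm=23.0):
    return round(min(pcmax_dbm, p0_dbm + 10 * math.log10(n_prb)
                     + alpha * pathloss_db), 1)

pre = pusch_power(-76, 0.8, 110, 10)   # -76 + 10 + 88  = 22.0 dBm
post = pusch_power(-70, 1.0, 110, 10)  # -70 + 10 + 110 -> capped at Pcmax 23.0
print(pre, post)
```

With full path-loss compensation (alpha = 1.0) and the higher P0, cell-edge UEs transmit at or near Pcmax, which is what lifts PUSCH SINR above the raised interference floor.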
PM Counters:
NHO.FailIntraFreq.Sum increases from 2% to 12%
Example (Daily Average):
Alarms:
No neighbor relation alarms
Correlation:
Failures concentrated in specific neighbor pairs
Example:
Pattern:
Affects cells with overlapping coverage areas
Analyze HO failures between specific cell pairs using:
Example observations (Top failing pairs):
CELL_A → CELL_B
CELL_C → CELL_D
Observation:
Conclusion:
HO triggering occurs too late in overlapping coverage scenarios.
Compare mobility parameters between problematic cells:
Parameters analyzed:
Example comparison (CELL_A vs CELL_B):
Observation:
Conclusion:
Mobility parameter mismatch is contributing to late HO execution.
Analyze measurement report quality using:
Example observations (last 6 hours):
CELL_A
CELL_B
Observation:
Conclusion:
Delayed and filtered measurements worsen late HO behavior.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| a3Offset | dB3 | dB2 | Earlier handover trigger |
| hysteresis | dB2 | dB1 | Reduce measurement filtering |
| timeToTriggerA3 | 480 ms | 320 ms | Faster reaction to changing conditions |
| cellIndividualOffset | 0 dB | +3 dB (for target) | Boost target cell attractiveness |
| filterCoefficientRSRP | fc4 | fc2 | Faster RSRP filtering |
| reportAmountA3 | infinity | 4 | Limit excessive reporting |
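The parameter changes above can be sanity-checked against the A3 entry condition (TS 38.331, simplified here by omitting the frequency-specific offsets). The RSRP samples (−105/−102 dBm) are illustrative:

```python
# Simplified A3 entry condition: Mn + Ocn - Hys > Mp + Off
def a3_triggered(mp_serving, mn_neigh, a3_offset_db, hys_db, cio_db=0.0):
    return mn_neigh + cio_db - hys_db > mp_serving + a3_offset_db

# Pre-optimization: neighbor must be >5 dB stronger -> HO triggers too late.
pre = a3_triggered(-105, -102, a3_offset_db=3, hys_db=2)
# Post-optimization: offset 2 dB, hysteresis 1 dB, CIO +3 dB on the target.
post = a3_triggered(-105, -102, a3_offset_db=2, hys_db=1, cio_db=3)
print(pre, post)  # False True
```

The same 3 dB neighbor advantage that failed to trigger an HO pre-optimization now triggers one, which is the intended "earlier handover" behavior in the overlap zone.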
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| Intra-Freq HO Success Rate | 87.5% | 98.2% | +10.7% | Improved |
| HO Failure (Too Late) | 6.2% | 0.8% | −5.4% | Reduced |
| HO Failure (Too Early) | 3.1% | 0.5% | −2.6% | Reduced |
| Ping-Pong HOs | 8.5% | 2.1% | −6.4% | Reduced |
| Average HO RSRP | −112 dBm | −105 dBm | +7 dB | Improved |
The intra-frequency HO failure increase was caused by late HO triggering due to mobility parameter mismatch and delayed measurement reporting in overlapping coverage areas.
After aligning A3 thresholds, reducing filtering, and optimizing reporting behavior, HO performance improved significantly with reduced failures and ping-pong events.
PM Counters:
NPDU.SessEstabFail.Sum for SNSSAI 010203 increases
Example (Hourly):
Alarms:
Slice resource utilization alarms observed
Correlation:
Failures occur when NSlice.RB.Util.SNSSAI_010203 > 80%
Example:
Pattern:
Affects only URLLC slice (SNSSAI 010203)
eMBB slice (SNSSAI 010101) remains unaffected
Analyze slice resource utilization and failures using the following counters:
Example observations (18:00–21:00):
URLLC Slice → SNSSAI 010203
eMBB Slice → SNSSAI 010101
Observation:
Conclusion:
PDU session failures are caused by URLLC slice resource exhaustion.
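The failure condition matches a simple utilization-gated admission check: sessions start failing once slice RB utilization crosses the 80% correlation threshold noted above. The per-slice utilization samples are illustrative:

```python
# PDU session admission gated on slice RB utilization (illustrative model).
SLICE_RB_LIMIT = 0.80   # failures correlate with util > 80% in this scenario

def admit_pdu_session(rb_util):
    return rb_util < SLICE_RB_LIMIT

samples = {"URLLC (SNSSAI 010203)": 0.92, "eMBB (SNSSAI 010101)": 0.55}
for slice_id, util in samples.items():
    print(slice_id, "admit" if admit_pdu_session(util) else "reject")
```

This also explains why the eMBB slice is untouched: its utilization never approaches the gate.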
Check URLLC slice QoS configuration using:
Example audit results (SNSSAI 010203):
Observation:
Conclusion:
QoS misalignment contributes to session establishment failures under load.
Analyze admission control decisions for URLLC slice using:
Example observations (last 2 hours):
Observation:
Conclusion:
Admission control thresholds are too restrictive for URLLC traffic.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| sliceMaxRBPercentage | 20% | 30% | Increase resource allocation for URLLC |
| guaranteedFlowBitRateUL | 10 Mbps | 50 Mbps | Increase guaranteed rate for URLLC |
| packetDelayBudget | 20 ms | 10 ms | Tighter delay budget for URLLC |
| preemptionCapability | may-not-preempt | may-preempt | Allow URLLC to preempt eMBB |
| preemptionVulnerability | preemptable | not-preemptable | Protect URLLC from preemption |
| 5qi6MaxRetxThreshold | 4 | 2 | Fewer retransmissions for lower latency |
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| URLLC PDU Session Success Rate | 71.5% | 99.2% | +27.7% | Improved |
| URLLC Slice RB Utilization | 92% | 75% | −17% | Reduced |
| URLLC Latency (5QI 6) | 28 ms | 12 ms | −16 ms | Reduced |
| URLLC Packet Loss Rate | 1.8% | 0.1% | −1.7% | Reduced |
| eMBB Impact (Throughput) | 0% | −8% | −8% | Acceptable |
The PDU session establishment failures were caused by URLLC slice resource exhaustion combined with misaligned QoS and admission control policies.
After increasing URLLC resource allocation, enabling preemption, and tightening QoS parameters, URLLC session success rate and latency improved significantly with minimal acceptable impact on eMBB traffic.
PM Counters:
High NMCS.Avg (24–27) but low NMIMO.Rank.Avg (1.2–1.5)
Example (Affected UE Categories):
Alarms:
No MIMO hardware alarms
Correlation:
Occurs when NUL.SRS.SNR.Avg < 5 dB
Example:
Pattern:
Affects specific UE categories (e.g., Category X)
Analyze MIMO and SRS performance correlation using the following counters:
Example observations (per UE category):
UE Category X
UE Category Y
Observation:
Conclusion:
DL throughput degradation is caused by poor uplink channel sounding quality, not modulation limitation.
Check SRS configuration for different UE categories:
Parameters audited:
Example configuration (Category X):
Observation:
Conclusion:
SRS configuration is insufficient to support higher MIMO ranks.
Analyze channel correlation metrics using:
Example observations (last 6 hours):
High Correlation
Medium Correlation
Low Correlation
Observation:
Conclusion:
High channel correlation combined with poor SRS quality limits rank adaptation.
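Why correlation caps the usable rank can be illustrated with the singular values of a small channel matrix: a spatial layer is only usable if its singular value is within some margin of the strongest one. The 2×2 matrices and the 10 dB margin below are illustrative, not the gNB's actual rank-selection algorithm:

```python
import numpy as np

# Count usable MIMO layers from the channel's singular-value spread.
def usable_rank(H, threshold_db=10.0):
    s = np.linalg.svd(H, compute_uv=False)          # sorted descending
    return int(np.sum(20 * np.log10(s / s[0]) > -threshold_db))

low_corr = np.array([[1.0, 0.1], [0.1, 1.0]])   # near-orthogonal paths
high_corr = np.array([[1.0, 0.9], [0.9, 1.0]])  # strongly correlated paths
print(usable_rank(low_corr), usable_rank(high_corr))  # 2 1
```

Poor SRS quality makes the estimated singular values even less reliable, so the scheduler falls back to rank 1–2 regardless of the true channel.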
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| srsBandwidth | BW4 | BW2 | Wider SRS for better channel estimation |
| srsPeriodicity | 20 ms | 5 ms | More frequent SRS for fast-changing channels |
| srsMaxPorts | 2 | 4 | Enable more SRS ports for better MIMO |
| codebookSubsetRestriction | fully-restricted | partially-restricted | Allow more precoding flexibility |
| pmiRiReportPeriodicity | 80 ms | 20 ms | Faster PMI/RI reporting |
| csiRsDensity | one | three | Denser CSI-RS for better channel estimation |
| KPI | Pre-Optimization | Post-Optimization | Δ | Trend |
|---|---|---|---|---|
| Average Rank | 1.3 | 2.8 | +1.5 | Improved |
| DL Throughput (Category X UEs) | 185 Mbps | 420 Mbps | +235 Mbps | Improved |
| SRS SNR | 4.2 dB | 8.5 dB | +4.3 dB | Improved |
| MIMO Layer Utilization | 32% | 68% | +36% | Improved |
| CQI Reporting Accuracy | 65% | 88% | +23% | Improved |
The DL throughput degradation occurred due to poor uplink sounding reference quality, which limited accurate MIMO rank estimation despite high MCS values.
After optimizing SRS bandwidth, periodicity, reporting frequency, and CSI-RS density, MIMO rank utilization improved significantly, resulting in substantial DL throughput gains.
PM Counters:
NDelay.UP.E2E.5QI_79.P95 spikes from 25 ms to 65 ms during evening hours
Example (Hourly P95 Latency):
Alarms:
“Packet Delay Threshold Exceeded” for 5QI = 79
Correlation:
High NRLC.ReasTimeout.Sum and NHARQ.Retx.Avg
Example:
Pattern:
Coincides with peak gaming traffic during 18:00–23:00
Decompose E2E latency by protocol layer using:
Example observations (per minute, peak hour):
Observation:
Conclusion:
Latency spike is mainly caused by RLC retransmissions and HARQ retries under peak load.
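The P95 KPI behavior is worth seeing on synthetic data: a small fraction of RLC/HARQ-retransmitted packets is enough to drag the tail percentile far above the median. The delay distribution below is simulated, not measured:

```python
import numpy as np

# How a heavy retransmission tail inflates a P95 latency KPI
# such as NDelay.UP.E2E.5QI_79.P95.
rng = np.random.default_rng(7)
base = rng.normal(20, 3, 9_500)        # first-transmission packets: ~20 ms
retx_tail = rng.normal(70, 10, 500)    # retransmitted packets: ~70 ms
delays_ms = np.concatenate([base, retx_tail])

print(f"P50={np.percentile(delays_ms, 50):.0f} ms, "
      f"P95={np.percentile(delays_ms, 95):.0f} ms")
```

The median barely moves while P95 jumps, which is exactly the signature seen in the hourly trigger data.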
Analyze gaming traffic characteristics using:
Example observations:
5QI = 79 (Gaming / AR)
5QI = 80
5QI = 6
Observation:
Conclusion:
Default QoS handling is not optimal for bursty, latency-critical gaming traffic.
Check gaming QoS policy configuration using:
Example configuration (5QI = 79):
Observation:
Conclusion:
QoS policy is not tuned for ultra-low latency gaming services.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| pdcpSnSize (5QI=79) | 18 bits | 12 bits | Reduced SN overhead for gaming packets |
| rlcMode (5QI=79) | AM | UM | Eliminate RLC retransmission delay |
| dlDataSplitThreshold | 100 bytes | 50 bytes | Faster transmission of small gaming packets |
| harqMaxRetx (5QI=79) | 4 | 2 | Fewer retransmissions for latency-sensitive traffic |
| spsInterval (5QI=79) | disabled | 10 ms | Semi-persistent scheduling for periodic gaming traffic |
| drxInactivityTimer | 20 ms | 5 ms | Shorter inactivity for responsive gaming |
| KPI | Pre-Optimization | Post-Optimization | Δ | Impact |
|---|---|---|---|---|
| 95th Percentile Latency (5QI=79) | 65 ms | 28 ms | −37 ms | Significant Improvement |
| Packet Delay Variation (Jitter) | 22 ms | 8 ms | −14 ms | Excellent |
| Gaming Packet Loss Rate | 2.1% | 0.4% | −1.7% | Excellent |
| RLC Reassembly Timeouts | 8.5% | 1.2% | −7.3% | Excellent |
| HARQ Round Trip Time | 12 ms | 8 ms | −4 ms | Good |
| Overall Cell Throughput | – | −2% | −2% | Minor Impact |
Latency spikes for gaming and AR services (5QI=79) were caused by RLC retransmissions, excessive HARQ retries, and non-optimized QoS policies during peak gaming hours.
After switching to RLC UM, reducing retransmissions, enabling SPS, and optimizing PDCP and DRX parameters, latency and jitter were significantly reduced with only a minor, acceptable impact on overall cell throughput.
PM Counters:
NBLER.DL.Avg consistently > 15% (threshold: 10%)
Example (Hourly Average):
Alarms:
“Radio Link Quality Degraded” alarm active
Correlation:
High NRLC.RetxDL.Sum and low NCQI.Avg
Example:
Pattern:
Affects all UEs in sector 2, not localized to specific users or locations
Analyze BLER patterns across UE categories using:
Example observations (Sector 2):
UE Category 4
UE Category 6
Observation:
Conclusion:
Issue is cell-wide link adaptation, not UE-specific radio coverage.
Check link adaptation effectiveness using:
Example observations (hourly):
18:00–19:00
19:00–20:00
Observation:
Conclusion:
Link adaptation loop is not reacting fast enough to channel degradation.
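The loop in question is outer-loop link adaptation (OLLA): a SINR offset is nudged on every HARQ feedback so that long-run BLER converges to the target. A minimal sketch, using the classic step-size ratio delta_down/delta_up = (1 − target)/target; the step sizes are illustrative, not Nokia defaults:

```python
# One OLLA step: raise the offset slightly on ACK, drop it sharply on NACK.
def olla_update(offset_db, ack, target_bler=0.05, delta_up=0.01):
    delta_down = delta_up * (1 - target_bler) / target_bler
    return offset_db + delta_up if ack else offset_db - delta_down

offset = 0.0
for ack in [True] * 19 + [False]:   # a 5% NACK stream -> offset ~unchanged
    offset = olla_update(offset, ack)
print(round(offset, 3))
```

When the channel degrades faster than the offset can walk down (slow CSI feedback, large filtering), the scheduler keeps picking MCS values the channel can no longer carry, which is the observed cell-wide BLER.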
Analyze RF parameters and beam performance using:
Example audit findings (Top contributors):
Observation:
Conclusion:
RF and link adaptation parameters are tuned too aggressively for macro coverage.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| pdschTargetBlerDl | 10% | 5% | More conservative target for better reliability |
| cqiTableIndex | 1 (256QAM) | 2 (64QAM) | Use more robust CQI table |
| mcsTable | 256QAM | 64QAM | Conservative MCS for better BLER |
| dlAlpha (OLPC) | 0.8 | 0.6 | More conservative outer loop power control |
| initialMcsDl | 20 | 15 | Start with lower MCS for new connections |
| csiReportPeriodicity | 80 ms | 40 ms | Faster CSI feedback for better adaptation |
| KPI | Pre-Optimization | Post-Optimization | Δ | Impact |
|---|---|---|---|---|
| Average DL BLER | 16.8% | 7.2% | −9.6% | Excellent |
| RLC DL Retransmissions | 18.5% | 8.2% | −10.3% | Excellent |
| Average CQI | 8.2 | 10.5 | +2.3 | Good |
| DL Throughput | 320 Mbps | 280 Mbps | −40 Mbps | Acceptable Trade-off |
| User Experience (MOS) | 3.2 | 3.9 | +0.7 | Improved |
| RLF Rate | 5.2% | 2.1% | −3.1% | Excellent |
The persistent high DL BLER in the macro cell was caused by over-aggressive link adaptation and RF parameter configuration, not by poor coverage or UE limitations.
After adopting more conservative BLER targets, robust CQI/MCS tables, faster CSI feedback, and tuned power control, DL reliability improved significantly with an acceptable throughput trade-off.
PM Counters:
NMOS.Avg.5QI_1 drops from 4.1 to 3.2
Example (Hourly Average):
Alarms:
“Voice Quality Degradation” alarm active for multiple cells
Correlation:
High NPDV.5QI_1.StdDev (>20 ms) and NPacketLoss.5QI_1.Avg (>2%)
Example:
Pattern:
Affects handover regions between CELL_12, CELL_13, CELL_14
Correlate MOS with underlying metrics using:
Example observations (last 2 hours):
CELL_12 → CELL_13
CELL_13 → CELL_14
Observation:
Conclusion:
VoNR quality degradation is driven by packet loss, jitter, and inefficient header compression, especially during handovers.
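The loss/jitter-to-MOS link can be sketched with a simplified ITU-T G.107 E-model: compute an R-factor from impairments, then map it to MOS. The loss and delay impairment coefficients below are illustrative simplifications, not calibrated G.107 values; only the R-to-MOS mapping is the standard formula:

```python
# Simplified E-model: R-factor from loss/delay, then the G.107 MOS mapping.
def mos_from_r(r):
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def r_factor(loss_pct, delay_ms):
    ie_eff = 95 * loss_pct / (loss_pct + 10)   # loss impairment (illustrative)
    id_ = 0.024 * delay_ms                      # delay impairment (illustrative)
    return 93.2 - ie_eff - id_

mos_deg = mos_from_r(r_factor(2.5, 120))   # lossy, delayed HO-region call
mos_ok = mos_from_r(r_factor(0.3, 60))     # post-optimization conditions
print(round(mos_deg, 2), round(mos_ok, 2))
```

Even this crude model shows how a couple of percent packet loss plus jitter-driven delay pulls MOS well below 4, matching the direction of the observed NMOS.Avg.5QI_1 drop.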
Analyze VoNR quality degradation during handovers using:
Example observations:
Intra-Freq HO
Inter-gNB HO
Observation:
Conclusion:
Handover execution time and interruption directly impact VoNR MOS.
Check ROHC compression efficiency and failures using:
Example observations:
UE Category 3
UE Category 6
Observation:
Conclusion:
ROHC inefficiency contributes to packet loss and jitter during mobility.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| rohcMaxCid | 5 | 15 | More compression contexts for concurrent VoNR calls |
| rohcProfile | 0x0001 | 0x0006 | Use optimized profile for voice traffic |
| ttiBundling (5QI=1) | disabled | enabled | TTI bundling for better UL coverage in voice |
| ulTargetBler (5QI=1) | 10% | 1% | Ultra-low BLER target for voice |
| spsInterval (5QI=1) | disabled | 20 ms | SPS for consistent voice packet scheduling |
| hoExecutionTimer | 1000 ms | 500 ms | Faster handover execution for voice |
| KPI | Pre-Optimization | Post-Optimization | Δ | Impact |
|---|---|---|---|---|
| Average MOS Score | 3.2 | 4.0 | +0.8 | Excellent |
| Packet Loss Rate (5QI=1) | 2.5% | 0.3% | −2.2% | Excellent |
| Jitter (Packet Delay Variation) | 25 ms | 8 ms | −17 ms | Excellent |
| ROHC Compression Ratio | 1.8:1 | 3.5:1 | +1.7× | Excellent |
| Handover MOS Drop | 0.8 | 0.2 | −0.6 | Excellent |
| VoNR Call Drop Rate | 1.8% | 0.4% | −1.4% | Excellent |
The VoNR MOS degradation in dense urban areas was caused by handover-induced packet loss, high jitter, UL BLER, and inefficient ROHC compression.
By optimizing ROHC contexts, enabling SPS and TTI bundling, tightening UL BLER targets, and reducing HO execution time, VoNR quality was restored to near-ideal levels across all affected cells.
PM Counters:
NDelay.UP.E2E.5QI_80.P99 > 50 ms (requirement: 20 ms)
Example (Latency Distribution):
Alarms:
“URLLC Service Level Agreement Violation”
Correlation:
High NPDCP.ReorderingDelay.Avg and increased scheduling delays
Example:
Pattern:
Affects specific time-critical industrial applications (robot control, motion control)
Analyze URLLC traffic characteristics using:
Example observations (last 1 hour):
Motion Control Application
PLC Control Application
Observation:
Conclusion:
Latency spikes are driven by tail latency accumulation, not average delay.
Check scheduling behavior for URLLC traffic using:
Example observations (INDUSTRIAL_CELL_01):
Scheduler: Proportional Fair
Scheduler: QoS-Aware
Observation:
Conclusion:
Scheduling priority for URLLC is insufficient during congestion.
Break down URLLC latency components using:
Example observations (last 30 minutes):
PDCP Reordering
MAC Scheduling
HARQ Processing
Observation:
Conclusion:
End-to-end URLLC latency violation is caused by scheduler delay + PDCP reordering.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| pdcpDuplication (5QI=80) | disabled | enabled | Packet duplication for ultra-reliability |
| maxHarqTx (5QI=80) | 4 | 8 | More HARQ retransmissions for reliability |
| logicalChannelGroup (5QI=80) | 1 | 0 | Highest scheduling priority |
| prioritisedBitRate (5QI=80) | 0 | 1000 kbps | Guaranteed bit rate for URLLC |
| bucketSizeDuration (5QI=80) | 100 ms | 10 ms | Smaller bucket for bursty URLLC traffic |
| schedulingRequestId (5QI=80) | 1 | 0 | Highest priority SR |
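The prioritisedBitRate/bucketSizeDuration pair maps onto the MAC logical-channel prioritization token bucket (TS 38.321): the bucket fills at PBR and is capped at PBR × BSD. A quick check of what the new values imply:

```python
# Max LCP bucket size: PBR (kbit/s) * BSD (ms) conveniently equals bits.
def max_bucket_bits(pbr_kbps, bsd_ms):
    return pbr_kbps * bsd_ms

pre = max_bucket_bits(0, 100)      # PBR 0 -> no guaranteed allowance at all
post = max_bucket_bits(1000, 10)   # 1000 kbit/s * 10 ms = 10,000 bits
print(pre, post)
```

A small bucket drained at a guaranteed rate fits short periodic URLLC bursts: the flow always has credit when a control packet arrives, but cannot hoard enough credit to starve other channels.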
| KPI | Pre-Optimization | Post-Optimization | Δ | Impact |
|---|---|---|---|---|
| 99th Percentile Latency (5QI=80) | 52 ms | 18 ms | −34 ms | Excellent |
| 99.9th Percentile Latency | 85 ms | 25 ms | −60 ms | Exceptional |
| Reliability (1 − Packet Loss) | 99.9% | 99.999% | +0.099% | Excellent |
| PDCP Duplication Overhead | 0% | 100% | +100% | High Cost |
| eMBB Throughput Impact | 0% | −15% | −15% | Acceptable |
| URLLC SLA Compliance | 65% | 98% | +33% | Excellent |
The URLLC latency SLA violation was caused by scheduler prioritization gaps and PDCP reordering delays, which primarily impacted tail latency (P99 / P99.9).
By enabling PDCP duplication, enforcing strict scheduling priority, increasing HARQ reliability, and optimizing bucket and SR parameters, URLLC latency and reliability were restored to industrial-grade requirements with an acceptable trade-off on eMBB throughput.
PM Counters:
Sector-specific high BLER in NBLER.DL.Beam_X.Avg
Example (Top Impacted Beams):
Alarms:
“Beam Quality Degradation” on specific beams
Correlation:
Low NMIMO.Rank.Avg and poor NCQI.Beam_X.Avg
Example:
Pattern:
Affects users located in specific angular sectors
Analyze performance by beam index using:
Example observations (MIMO_CELL_03):
Beam 7
Beam 11
Observation:
Conclusion:
BLER degradation is linked to beam-level MIMO behavior, not RF coverage.
Check MIMO and beamforming configuration using:
Example audit results:
Observation:
Conclusion:
MIMO configuration is over-optimized for peak throughput, causing BLER instability.
Analyze channel correlation for MIMO performance using:
Example observations (last 6 hours):
High Correlation
Medium Correlation
Low Correlation
Observation:
Conclusion:
Channel correlation directly impacts MIMO efficiency and BLER.
| Parameter | Pre-Optimization | Post-Optimization | Rationale |
|---|---|---|---|
| codebookSubsetRestriction | fully-restricted | partially-restricted | More precoding flexibility |
| csiRsDensity | one | three | Denser CSI-RS for better channel estimation |
| beamReportingPeriodicity | 160 ms | 40 ms | Faster beam reporting for mobility |
| rankIndicatorRestriction | rank-4-allowed | rank-2-only | Conservative rank for better BLER |
| pmiRiReportPeriodicity | 80 ms | 20 ms | Faster PMI/RI reporting |
| srsBandwidth | BW4 | BW8 | Wider SRS for better UL channel estimation |
| KPI | Pre-Optimization | Post-Optimization | Δ | Impact |
|---|---|---|---|---|
| Average DL BLER | 14.2% | 6.8% | −7.4% | Excellent |
| MIMO Rank Utilization | 2.8 | 2.2 | −0.6 | Acceptable |
| Beam Switching Success Rate | 88% | 96% | +8% | Good |
| CSI Reporting Accuracy | 72% | 89% | +17% | Excellent |
| Cell Throughput | 850 Mbps | 720 Mbps | −130 Mbps | Trade-off |
| User Consistency Index | 65% | 82% | +17% | Excellent |
| User Consistency Index | 65% | 82% | +17% | Excellent |
The high BLER in the Massive MIMO cell was caused by beam-specific MIMO misconfiguration and high channel correlation, not by coverage or hardware faults.
By increasing CSI-RS density, improving reporting periodicity, relaxing precoding restrictions, and enforcing conservative rank selection, BLER and user consistency improved significantly with an acceptable throughput trade-off.