Preface: this post is a record of my own work on this problem. By the time you read it there may not yet be a conclusion, so if you are hoping to find the final fix here, please think twice before continuing. If you are here for the debugging approach, read on:
I recently ran into a fairly hard problem:
After device aging, eMMC read/write performance degrades by more than 30%.
The concrete reproduction steps are:
1. Flash the firmware and boot;
2. Let the phone sit idle for a while until processes settle;
3. Run an I/O benchmark with AndroBench;
4. Use an internal aging tool to fill the phone's storage to 90%, then delete down to about 60% (a rough stand-in sketch follows this list);
5. Fill back up to 90%;
6. Run AndroBench again; write speed has degraded badly (sequential write down 24%, random write down 36%).
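The fill tool used in steps 4–5 is internal, but a rough stand-in is easy to write. The sketch below is only an approximation of what such a tool does; `DATA_DIR` and the chunk size are arbitrary choices of mine, not part of the original setup:

```python
import os
import shutil

DATA_DIR = "/data/local/tmp/fill"   # hypothetical fill directory on the device
CHUNK = 64 * 1024 * 1024            # 64 MiB per fill file

def fill_to(target_pct: float):
    """Write files of random data until the filesystem holding DATA_DIR
    reaches roughly target_pct usage."""
    os.makedirs(DATA_DIR, exist_ok=True)
    i = 0
    while True:
        total, used, _free = shutil.disk_usage(DATA_DIR)
        if used / total * 100 >= target_pct:
            break
        with open(os.path.join(DATA_DIR, f"fill_{i:05d}.bin"), "wb") as f:
            f.write(os.urandom(CHUNK))
        i += 1

def delete_down_to(target_pct: float):
    """Remove fill files until usage drops back to roughly target_pct."""
    for name in sorted(os.listdir(DATA_DIR)):
        total, used, _free = shutil.disk_usage(DATA_DIR)
        if used / total * 100 <= target_pct:
            break
        os.remove(os.path.join(DATA_DIR, name))

if __name__ == "__main__":
    fill_to(90)          # step 4: fill to ~90%
    delete_down_to(60)   # ...then delete back to ~60%
    fill_to(90)          # step 5: refill to ~90%
```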
Interpretation:
It is well known that eMMC I/O performance degrades when storage occupancy is high, but what makes this case odd is:
1. If the occupied space is a single large file, the degradation is nowhere near this large;
2. If the occupied space is a single large file, even a large degradation only happens occasionally;
3. With the fill pattern described above, the problem persists across a reboot;
4. It even persists after the fill data has been deleted entirely.
The configuration of the affected device:
SoC: MT6761D (Cortex-A53 @ 1.8 GHz x 4)
Memory: 2GB + 32GB
OS: Android Q
File System: F2FS
Data Encryption: FBE
Some time-consuming attempts that produced no result:
1. Switching to FDE encryption: the problem persists;
2. Switching the file system back to EXT4: the problem persists.
Analysis:
1. From experience, the first thing to rule out is a hardware limitation, because if the component itself is the cause, all later analysis is wasted effort.
So start with the blockio information, which is closest to the device (path /d/blockio, /d/blocktag/mmc/blockio, /sys/kernel/debug/blockio, or /sys/kernel/debug/blocktag/mmc/blockio):
With normal write speed, blockio records the following:
[ 3771.970446]mmc.q:0.wt:80203,14536704,177.wl:17%,177573164,1000334925,3549.vm:995664,40,2992780,0,2990396,13222....
[ 3772.970512]mmc.q:0.wt:85263,15192064,174.wl:17%,174295678,1000066310,3709.vm:995664,40,2992780,0,2990396,13222....
[ 3773.970946]mmc.q:0.wt:86204,15536128,176.wl:17%,176434802,1000432849,3787.vm:995664,24,2992796,4,2990424,13222....
[ 3774.971025]mmc.q:0.wt:85943,15577088,177.wl:17%,177265870,1000079156,3803.vm:995664,24,2992796,0,2990428,13222....
And when the performance degradation occurs, the records look like this:
[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,118875025,1014868695,2459.vm:960640,56,1105952,0,1367124,12449....
[ 740.390092]mmc.q:0.wt:89297,10424320,114.wl:11%,114144052,1032285848,2544.vm:960640,28,1105960,36,1367124,12449....
[ 741.391601]mmc.q:0.wt:88104,10285056,114.wl:11%,114994275,1001509541,2486.vm:960648,0,1106056,0,1367284,12449....
[ 742.422869]mmc.q:0.wt:83245,10399744,122.wl:11%,122012438,1031268926,2539.vm:960640,44,1106104,0,1367288,12449....
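For reference, a minimal capture sketch, assuming a root shell on the device with debugfs mounted and using the candidate paths listed earlier, could simply read whichever blockio node exists:

```python
import os

# Candidate locations of the MTK blocktag "blockio" node; which one exists
# depends on the kernel version and where debugfs is mounted.
CANDIDATES = [
    "/d/blockio",
    "/d/blocktag/mmc/blockio",
    "/sys/kernel/debug/blockio",
    "/sys/kernel/debug/blocktag/mmc/blockio",
]

def blockio_path() -> str:
    for p in CANDIDATES:
        if os.path.exists(p):
            return p
    raise FileNotFoundError("blockio node not found; is debugfs mounted?")

if __name__ == "__main__":
    # Requires a root shell on the device (e.g. via `adb shell` + su).
    with open(blockio_path()) as f:
        print(f.read())
```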
The meaning of each field is documented in FAQ21831 on MTK Online, excerpted below:
| Type | Format | Examples | Description |
| --- | --- | --- | --- |
| Storage Type and Request Queue | (ufs\|mmc).(0~9) | mmc.q:0, mmc.q:1, ufs.q:0 | mmc.q:0 => eMMC, mmc.q:1 => T-Card, ufs.q:0 => UFS |
| Workload | wl:(0~99)% | wl:49% | Percentage of time that the UFS/MMC driver is executing I/O. wl:49% => ~490 ms out of 1000 ms (49%) was spent executing I/O. |
| Write Throughput | wt:speed,size,time | wt:2442,6004736,2400 | speed: KB/s; size: written size in bytes; time: write time in ms |
| Read Throughput | rt:speed,size,time | rt:38805,27418624,690 | speed: KB/s; size: read size in bytes; time: read time in ms |
| Virtual Memory Status | vm:fp,fd,nd,wb,nw | vm:0,178336,22776,0,59272 | Storage-related virtual memory statistics, in KB. FilePages (fp): pages used as file cache; FileDirty (fd): dirty file pages still to be written to disk; NumDirtied (nd): accumulated number of dirtied pages; WriteBack (wb): pages currently being written back to disk; NumWritten (nw): accumulated number of written pages. |
| Page PID logger | {pid:write_count,write_length,read_count,read_length} | {06643:00000:00000000:00522:02138112} {06740:00000:00000000:00174:00712704} | I/O statistics of each process. pid: process id; write_count: number of pages the process has written; write_size: written size in bytes; read_count: number of pages the process has read; read_size: read size in bytes. |
| CPU Status | cpu:user,nice,system,idle,iowait,irq,softirq | cpu:48146707,7679908,61820079,114165927,4175379,0,125657 | Currently not used. |
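Based on this table, a small parser for the wt:/wl: fields of one blockio line can look like the sketch below; the exact separators are inferred from the sample lines above rather than from any official format specification:

```python
import re

# wt: speed in KB/s, written size in bytes, write time in ms
WT_RE = re.compile(r"wt:(\d+),(\d+),(\d+)")
# wl: percentage of the 1-second window spent executing I/O
WL_RE = re.compile(r"wl:(\d+)%")

def parse_blockio_line(line: str):
    wt = WT_RE.search(line)
    wl = WL_RE.search(line)
    if not wt or not wl:
        return None
    speed_kbps, size_bytes, time_ms = map(int, wt.groups())
    return {
        "speed_kbps": speed_kbps,
        "size_bytes": size_bytes,
        "time_ms": time_ms,
        "workload_pct": int(wl.group(1)),
    }

line = "[  739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,118875025,1014868695,2459."
print(parse_blockio_line(line))
# {'speed_kbps': 83355, 'size_bytes': 10072064, 'time_ms': 118, 'workload_pct': 11}
```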
Each line shows one second of CMDQ read/write activity. I captured this during the random write test, so take this line as an example:
[ 739.357804]mmc.q:0.wt:83355,10072064,118.wl:11%,118875025,1014868695,2459.vm:960640,56,1105952,0,1367124,12449....
It reads as follows:
Within this one second, 10072064 bytes were written; the writes took 118 ms, which is 11% of the second, and the computed speed is 83355 KB/s.
Comparing against the normal-case lines, the problematic second clearly writes less data, but because the write time is correspondingly shorter, the computed write speed comes out roughly the same.
Since wl (workload) is computed as the proportion of each second spent executing writes, a hardware limitation would show up as a very high wl: the device would be busy writing for essentially the whole second, and because it is slow, the write time would dominate.
In fact the opposite is true: when the problem occurs, wl is lower than in the normal case, which means the pressure is not in the block layer but somewhere above it.
In other words, the disk has no trouble writing; the CMDQ simply receives and dispatches only this small amount of work each second, so the upper layers perceive a slowdown.
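As a quick sanity check on this reading, the numbers in the example line are self-consistent (assuming the KB here means 1024 bytes and the third wt field is milliseconds):

```python
size_bytes, time_ms = 10072064, 118

# bytes per ms -> bytes per s -> KiB per s
speed_kbps = size_bytes / time_ms * 1000 / 1024
print(round(speed_kbps))   # ~83356, matching the reported wt speed of 83355 KB/s

# fraction of the 1-second window spent executing writes
print(time_ms / 1000)      # 0.118, consistent with the reported wl:11%
```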
Summary: the problem lies somewhere above the CMDQ, not in the hardware component.
The next part will continue narrowing the problem down with ftrace.
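As a preview, one generic way to start that investigation (my own assumption, not necessarily the exact events used in the next part) is to enable the block-layer tracepoints through the usual debugfs tracing directory; this needs root and debugfs/tracefs mounted:

```python
# Generic block-layer ftrace setup sketch; paths are the standard tracefs layout.
TRACING = "/sys/kernel/debug/tracing"

def enable_block_tracing():
    # Enable all block_* tracepoints and turn tracing on.
    with open(f"{TRACING}/events/block/enable", "w") as f:
        f.write("1")
    with open(f"{TRACING}/tracing_on", "w") as f:
        f.write("1")

def dump_trace(n_lines: int = 20):
    # Print the first few lines of the current trace buffer.
    with open(f"{TRACING}/trace") as f:
        for i, line in enumerate(f):
            if i >= n_lines:
                break
            print(line.rstrip())

if __name__ == "__main__":
    enable_block_tracing()
    dump_trace()
```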
Source: CSDN
Author: Ryan_ZHENG
Link: https://blog.csdn.net/u014175785/article/details/103581096