[SOLVED] [Old] WD Green Power drives may kill themselves !!
I am not sure if this will work, because I am not sure whether the messages file is actually read from disk or is (partly) cached in RAM, in which case no disk access is needed and the disk can go idle anyway.
When I get home I will give your script a try; if it works, you should see the Load_Cycle_Count stop increasing.
You could check the counter, wait a few minutes, and check again to see how quickly it increases. (Of course, do this without your Bubba being under load or anything, because in that case the disk won't idle and no load cycles will occur.)
Then run whilbone's script and check if the counter still increases. If it stops, the script works; otherwise try reading another file.
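That counter check can be scripted. A rough sketch (assuming smartmontools is installed, the drive is /dev/sda, and it runs as root; `get_lcc` and `lcc_delta` are just names made up for this example):

```shell
# Extract the raw Load_Cycle_Count from smartctl's attribute table.
get_lcc() {
    smartctl -d ata -a "$1" | awk '/Load_Cycle_Count/ { print $NF }'
}

# Take two samples some seconds apart; the difference is the number of
# head load/unload cycles that occurred in that window.
lcc_delta() {
    dev=$1
    interval=${2:-300}
    before=$(get_lcc "$dev")
    sleep "$interval"
    after=$(get_lcc "$dev")
    echo $((after - before))
}
```

For example, `lcc_delta /dev/sda 300` prints how many cycles happened in five minutes; anything more than a handful suggests the drive is parking aggressively.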
But I think this is quite a brutal way of avoiding the problem, and you should check that the temperature of the disk stays low, because you are continuously "stressing" the disk!
I am still hoping for a response from the Bubba team, and I hope they can provide a clean solution, because otherwise I fear a lot of Bubba Twos will get into trouble after running for half a year or more, which would reflect very badly on such a nice product!
OK, many thanks for your investigation! I think it's time to back up the B2 now...
Ton wrote: Those model numbers are not the only affected models; they are just the few models for which WD has provided a firmware update.
My WD10EACS (1 TB WD Green) is definitely affected, but they just haven't released a new firmware that changes the idle3 timer for it, so it is not in the list.
In fact, not just WD disks are affected, but probably also disks from other manufacturers. All disks with aggressive spin-down / head-parking strategies (to save power, heat and noise) are potential victims of this problem when the OS keeps waking the disk up just after it has gone idle.
So the best solution would be to tune the Linux on the Bubba so that it avoids waking the disk just after it goes idle; that would be a solution for ALL disks using these idle features. That means either waiting a long time between disk accesses (saving energy and heat), or using a very short access interval (below 7 seconds) so the disk never goes idle, at the cost of more energy and heat...
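One way to implement the short-interval option is a small write that is flushed to disk, so it cannot be served from the page cache (a sketch only, not a tested fix; the function name and file path are arbitrary):

```shell
# One keep-alive "tick": a small write, flushed with sync so it has to hit
# the platters instead of being satisfied from the page cache.
keepalive_tick() {
    date > "$1"
    sync
}

# Run forever with a 5-second interval (below the ~7-second park threshold):
#   while true; do keepalive_tick /var/tmp/disk-keepalive; sleep 5; done
```

Note the trade-off described above: the disk never gets to idle, so it draws more power and runs warmer.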
Code:
Thu Feb 5 23:23:47 CET 2009
Device Model: WDC WD5000AACS-00ZUB0
9 Power_On_Hours 0x0032 097 097 000 Old_age Always - 2907
193 Load_Cycle_Count 0x0032 191 191 000 Old_age Always - 27583
194 Temperature_Celsius 0x0022 106 092 000 Old_age Always - 41
Code:
iostat -d -k -p /dev/sda 5 60
My results:
Seems to be okay?
Code:
Device Model: WDC WD10EACS-65D6B0
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3257
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 13
194 Temperature_Celsius 0x0022 112 104 000 Old_age Always - 38
My results are BAD... 248878 load cycles. ;-(
Now I'm torrenting (normally not much) to keep it alive...
Please Excito, be quick with a good solution...
Puma
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Fri Feb 6 00:07:20 2009 from 192.168.101.66
hdroogers@bubba:~$ su
Password:
bubba:/home/hdroogers# smartctl -d ata -a /dev/sda
smartctl version 5.36 [powerpc-unknown-linux-gnu] Copyright (C) 2002-6 Bruce Allen
Home page is http://smartmontools.sourceforge.net/
=== START OF INFORMATION SECTION ===
Device Model: WDC WD5000ABPS-01ZZB0
Serial Number: WD-WCASU5232253
Firmware Version: 02.01B01
User Capacity: 500,107,862,016 bytes
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
Local Time is: Fri Feb 6 11:57:17 2009 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x84) Offline data collection activity
was suspended by an interrupting command from host.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (13980) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 163) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0003 168 168 021 Pre-fail Always - 4558
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 23
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x000e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3006
10 Spin_Retry_Count 0x0012 100 253 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0012 100 253 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 8
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 1
193 Load_Cycle_Count 0x0032 118 118 000 Old_age Always - 248878
194 Temperature_Celsius 0x0022 099 095 000 Old_age Always - 48
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
No self-tests have been logged. [To run self-tests, use: smartctl -t]
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
I have tested whilbone's test script (see below) tonight, and as hoped it stopped the disk from going to idle3 and in this way avoided the load cycles; the temperature was also stable, so the workaround works for the time being...
I am happy the Excito team takes the issue seriously and is working on a solution, because they are probably the ones with the most insight into the system.
Code:
while true; do
    sleep 5
    tail /var/log/messages > /dev/null
done
Oops, apparently mine is over 300,000, but so far I have not encountered any problems.
Device Model: WDC WD1000FYPS-01ZKB0
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3106
193 Load_Cycle_Count 0x0032 089 089 000 Old_age Always - 335934
194 Temperature_Celsius 0x0022 114 110 000 Old_age Always - 38
Martin
Oops, I'm also moving in the direction of those 300,000 load cycles...
Code:
smartctl -d ata -a /dev/sda | grep -i -E '(load_cycle|temp|Power_On_Hours)'
9 Power_On_Hours 0x0032 096 096 000 Old_age Always - 3042
193 Load_Cycle_Count 0x0032 117 117 000 Old_age Always - 249270
194 Temperature_Celsius 0x0022 108 104 000 Old_age Always - 44
smartctl -d ata -a /dev/sda
Device Model: WDC WD10EACS-00ZJB0
Serial Number: WD-WCASJ1751130
Firmware Version: 01.01B01
Last edited by MagnusJonsson on 08 Feb 2009, 16:25, edited 2 times in total.
This is a big problem. Mine gives:
I'm gonna try to clone my disk until this is fixed, or shut the server down.
Device Model: WDC WD5000AACS-00ZUB0
Serial Number: WD-WCASU4837669
Firmware Version: 01.01B01
User Capacity: 500,107,862,016 bytes
193 Load_Cycle_Count 0x0032 134 134 000 Old_age Always - 200112
Hope the Excito team hurries.
Rewien
Last edited by rewien on 07 Feb 2009, 13:40, edited 1 time in total.
Hi again all,
I have now contacted WD and hope to get some good answers from them soon. Apparently they have released fixes for models other than the ones we're using; perhaps there are fixes underway for these as well.
To calm you down just a little bit: I've read reports of users having several million unload cycles on their disks without failure, but of course we cannot rely on that.
Also, I have been told that this issue is solved on all drives manufactured after 2008-12-01 (they supposedly set the unload timer to a higher value), but I have asked WD to confirm this.
Furthermore, there is the question of how to get our system to either not hit the drive so often, or hit it often enough that the unload never occurs. It seems that some systems do not have this issue at all - my test units here, for example:
unit #1:
WDC WD10EACS-65D6B0
power on hours: 3214
load cycle count: 19
unit #2:
WDC WD10EACS-65D6B0
power on hours: 3229
load cycle count: 37
[EDIT]: Information from a post in a thread on the Synology forum: "It seems that disks with version WD10EACS-00D6B0 (Nov 08) or later no longer show a high LCC. Instead they show the START_STOP_COUNT value as LCC. The question remains whether WD really fixed the problem or just masked it by no longer showing the high LCC values to us."
[/EDIT]
Also several users in this thread reported similar figures.
We are investigating this in parallel, but if anyone has any ideas about why this issue only occurs with some setups, and what to do to keep the load cycle count stable, we are very interested. We do not have any setups here with an increasing load cycle count, so we cannot test that currently.
I'll let you know as soon as I find out more. Until then, thanks for helping out on this.
/Johannes (Excito co-founder a long time ago, but now I'm just Johannes)
First of all I'd like to thank you Johannes for spending time here in the forum on a Saturday afternoon, it is much appreciated.
Second, I'd like to confirm the numbers from my drive.
Device Model: WDC WD10EACS-00D6B0
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 15
9 Power_On_Hours 0x0032 097 097 000 Old_age Always - 2604
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 15
My LCC is the same as the SSC, which supports the theory that WD might be masking the issue, and now I don't even know whether I have an issue or not... I might just keep running my 5-second keep-alive script just in case.
/WhilBone
whilbone and others:
Yes, I just did a small experiment. Assuming that WD just masked the load cycle count away in their SMART output, the actual unload cycles should still be audible, right?
So I put my ear to the disk and waited, and there actually was a click (boink) every few seconds. Can anyone confirm this? Perhaps users with an increasing load count can match the sounds to the actual load count?
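Matching the clicks against the counter could be done with a small polling loop (a sketch; `watch_lcc` is a made-up name, and it assumes smartmontools and root privileges):

```shell
# Print a timestamp and the raw Load_Cycle_Count every 10 seconds, so the
# audible "boink" can be matched against counter increments.
watch_lcc() {
    dev=$1
    samples=${2:-6}
    n=0
    while [ "$n" -lt "$samples" ]; do
        printf '%s %s\n' "$(date +%T)" \
            "$(smartctl -d ata -a "$dev" | awk '/Load_Cycle_Count/ { print $NF }')"
        n=$((n + 1))
        if [ "$n" -lt "$samples" ]; then sleep 10; fi
    done
}
```

Running `watch_lcc /dev/sda 6` for a minute while listening should show whether each click coincides with the counter going up.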
I also tried accessing the disk every few seconds, and yes, this removed the boink, but instead there is disk access activity every few seconds. I don't know if that is any better.
Perhaps it's best to await the response from WD, but if anyone else has some input meanwhile, I'd be happy. Again, thanks for your patience and your help. I'll continue trying with more setups during the next few days.
/Johannes (Excito co-founder a long time ago, but now I'm just Johannes)