littlenanner85
Member
OK, I am a newb in this logcat business. Do you want me to post it on here? If so, it says it is too long.
I'm getting the same issue with Wi-Fi, but I haven't had a chance to trace it down. It appears that when the phone goes into sleep mode, the Wi-Fi triggers the same reboot as having the camera on. I ended up having to run e2fsck on all my partitions, as I started having issues with /data around the sixth reboot.

Odd, as I don't get any reboots, even when dirty flashing from other CM10.2 nightlies. I just wipe the Dalvik cache, cache, and system, then flash the new nightly build. I've been doing that for the last 5-6 builds. I then flash the 8/13/13 JB 4.3 Gapps and the Crossbreeder zip to optimize my phone.
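For anyone unsure what running e2fsck "on all my partitions" looks like, here's a minimal dry-run sketch. It only prints the commands rather than running them; the by-name partition paths are assumptions for illustration (check your device's fstab or /proc/partitions for the real ones), and e2fsck should only ever be run from recovery with the partitions unmounted.

```shell
# Dry-run sketch: print the e2fsck invocations you'd run from recovery.
# The by-name paths below are assumptions; verify them for your device first.
for part in cache userdata system; do
  echo "e2fsck -fy /dev/block/platform/msm_sdcc.1/by-name/$part"
done
```

Once you've confirmed the paths, swap the echo for the real command (or paste the printed lines into an adb shell running in recovery).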
Hey there HUEYT, what version did you start with before you started dirty flashing? Which version did you install first? I'd like to give it a try myself, and I've been reading the posts, but lost track. Things seem to be going well for you, so I want to install the same way you did. Thanks.

10/29 was dirty flashed this morning and is all good here.
I get occasional lock screen issues. I use a pin to unlock, and sometimes the keypad is invisible, or jumbled.
If it's invisible, I can sometimes guess the key locations and unlock; otherwise I have to reboot.
I don't know if it's been brought up already in this thread, but I've been trying to figure out why my battery life seems lackluster and why it sometimes seems to run away. I believe the most noticeable culprit is Wi-Fi.
I've noticed it has the most battery draw, around 18-20% daily, but I finally found some good evidence today: I had Wi-Fi turned off through the quick toggle for most of the day, and the drain was still about the same, going from 100% battery down to 6%. The odd thing as well is that the Wi-Fi bar in the battery menu shows solid, as if it had been active all day.
Any thoughts?
QSEECOM: qseecom_release: data->released == false
[61999.305603] SysRq : Show Blocked State
[61999.308868] task PC stack pid father
[61999.314056] kthreadd D c08894a4 0 2 0 0x00000000
[61999.320465] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc)
[61999.330200] [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc) from [<c088aa40>] (mutex_lock+0x20/0x40)
[61999.339019] [<c088aa40>] (mutex_lock+0x20/0x40) from [<c01c1c98>] (get_online_cpus+0x2c/0x48)
[61999.347961] [<c01c1c98>] (get_online_cpus+0x2c/0x48) from [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8)
[61999.357330] [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8) from [<c0272b24>] (compact_nodes+0x10/0x80)
[61999.366546] [<c0272b24>] (compact_nodes+0x10/0x80) from [<c06929c0>] (lowmem_shrink+0x47c/0x4c0)
[61999.375274] [<c06929c0>] (lowmem_shrink+0x47c/0x4c0) from [<c024d814>] (shrink_slab+0x104/0x1b8)
[61999.383605] [<c024d814>] (shrink_slab+0x104/0x1b8) from [<c024e5a0>] (try_to_free_pages+0x288/0x48c)
[61999.393188] [<c024e5a0>] (try_to_free_pages+0x288/0x48c) from [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac)
[61999.403259] [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac) from [<c01bd9c4>] (copy_process+0xcc/0x11d8)
[61999.412872] [<c01bd9c4>] (copy_process+0xcc/0x11d8) from [<c01bebd0>] (do_fork+0x100/0x2f4)
[61999.421203] [<c01bebd0>] (do_fork+0x100/0x2f4) from [<c010714c>] (kernel_thread+0x70/0x80)
[61999.429016] [<c010714c>] (kernel_thread+0x70/0x80) from [<c01e32c0>] (kthreadd+0xec/0x19c)
[61999.437683] [<c01e32c0>] (kthreadd+0xec/0x19c) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.446166] kworker/u:0 D c08894a4 0 5 2 0x00000000
[61999.452148] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0123140>] (worker+0xf0/0x1e4)
[61999.460510] [<c0123140>] (worker+0xf0/0x1e4) from [<c01dbd38>] (process_one_work+0x308/0x514)
[61999.469024] [<c01dbd38>] (process_one_work+0x308/0x514) from [<c01dc630>] (worker_thread+0x2a8/0x4a0)
[61999.478210] [<c01dc630>] (worker_thread+0x2a8/0x4a0) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.486053] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.494354] kworker/u:1 D c08894a4 0 34 2 0x00000000
[61999.500732] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0141614>] (rr_read+0x1e8/0x214)
[61999.509307] [<c0141614>] (rr_read+0x1e8/0x214) from [<c0141864>] (do_read_data+0x20/0x15c0)
[61999.517211] [<c0141864>] (do_read_data+0x20/0x15c0) from [<c01dbd38>] (process_one_work+0x308/0x514)
[61999.526733] [<c01dbd38>] (process_one_work+0x308/0x514) from [<c01dc630>] (worker_thread+0x2a8/0x4a0)
[61999.535949] [<c01dc630>] (worker_thread+0x2a8/0x4a0) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.544189] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.552093] kswapd0 D c08894a4 0 41 2 0x00000000
[61999.558441] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc)
[61999.568237] [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc) from [<c088aa40>] (mutex_lock+0x20/0x40)
[61999.577453] [<c088aa40>] (mutex_lock+0x20/0x40) from [<c01c1c98>] (get_online_cpus+0x2c/0x48)
[61999.585540] [<c01c1c98>] (get_online_cpus+0x2c/0x48) from [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8)
[61999.595336] [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8) from [<c0272b24>] (compact_nodes+0x10/0x80)
[61999.604553] [<c0272b24>] (compact_nodes+0x10/0x80) from [<c06929c0>] (lowmem_shrink+0x47c/0x4c0)
[61999.613281] [<c06929c0>] (lowmem_shrink+0x47c/0x4c0) from [<c024d814>] (shrink_slab+0x104/0x1b8)
[61999.622039] [<c024d814>] (shrink_slab+0x104/0x1b8) from [<c024df48>] (kswapd+0x680/0xa50)
[61999.629791] [<c024df48>] (kswapd+0x680/0xa50) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.637878] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.646179] ksmd D c08894a4 0 42 2 0x00000000
[61999.652099] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc)
[61999.661926] [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc) from [<c088aa40>] (mutex_lock+0x20/0x40)
[61999.671112] [<c088aa40>] (mutex_lock+0x20/0x40) from [<c01c1c98>] (get_online_cpus+0x2c/0x48)
[61999.679626] [<c01c1c98>] (get_online_cpus+0x2c/0x48) from [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8)
[61999.689025] [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8) from [<c0273f64>] (ksm_scan_thread+0xa0/0xd28)
[61999.698028] [<c0273f64>] (ksm_scan_thread+0xa0/0xd28) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.706787] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.715118] kinteractiveup D c08894a4 0 101 2 0x00000000
[61999.721038] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c01e33d8>] (kthread+0x68/0x88)
[61999.729431] [<c01e33d8>] (kthread+0x68/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.737823] zygote D c08894a4 0 194 1 0x00000001
[61999.743713] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc)
[61999.753509] [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc) from [<c088aa40>] (mutex_lock+0x20/0x40)
[61999.762695] [<c088aa40>] (mutex_lock+0x20/0x40) from [<c01c1c98>] (get_online_cpus+0x2c/0x48)
[61999.771209] [<c01c1c98>] (get_online_cpus+0x2c/0x48) from [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8)
[61999.780609] [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8) from [<c0272b24>] (compact_nodes+0x10/0x80)
[61999.789794] [<c0272b24>] (compact_nodes+0x10/0x80) from [<c06929c0>] (lowmem_shrink+0x47c/0x4c0)
[61999.798156] [<c06929c0>] (lowmem_shrink+0x47c/0x4c0) from [<c024d814>] (shrink_slab+0x104/0x1b8)
[61999.807312] [<c024d814>] (shrink_slab+0x104/0x1b8) from [<c024e5a0>] (try_to_free_pages+0x288/0x48c)
[61999.816467] [<c024e5a0>] (try_to_free_pages+0x288/0x48c) from [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac)
[61999.826538] [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac) from [<c0244354>] (__get_free_pages+0x10/0x24)
[61999.836334] [<c0244354>] (__get_free_pages+0x10/0x24) from [<c0112d88>] (pgd_alloc+0x14/0xe0)
[61999.844421] [<c0112d88>] (pgd_alloc+0x14/0xe0) from [<c01bcfb4>] (mm_init+0xa0/0xdc)
[61999.852539] [<c01bcfb4>] (mm_init+0xa0/0xdc) from [<c01bd390>] (dup_mm+0x64/0x530)
[61999.860107] [<c01bd390>] (dup_mm+0x64/0x530) from [<c01be200>] (copy_process+0x908/0x11d8)
[61999.868347] [<c01be200>] (copy_process+0x908/0x11d8) from [<c01bebd0>] (do_fork+0x100/0x2f4)
[61999.876373] [<c01bebd0>] (do_fork+0x100/0x2f4) from [<c01096e8>] (sys_fork+0x28/0x2c)
[61999.884582] [<c01096e8>] (sys_fork+0x28/0x2c) from [<c0105dc0>] (ret_fast_syscall+0x0/0x30)
[61999.892944] kworker/u:3 D c08894a4 0 627 2 0x00000000
[61999.898864] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0123140>] (worker+0xf0/0x1e4)
[61999.907226] [<c0123140>] (worker+0xf0/0x1e4) from [<c01dbd38>] (process_one_work+0x308/0x514)
[61999.915771] [<c01dbd38>] (process_one_work+0x308/0x514) from [<c01dc630>] (worker_thread+0x2a8/0x4a0)
[61999.924957] [<c01dc630>] (worker_thread+0x2a8/0x4a0) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.932769] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.941497] krmt_storagecln D c08894a4 0 668 2 0x00000000
[61999.947875] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c014479c>] (__msm_rpc_read+0x22c/0xdb8)
[61999.956207] [<c014479c>] (__msm_rpc_read+0x22c/0xdb8) from [<c0145338>] (msm_rpc_read+0x10/0x78)
[61999.965423] [<c0145338>] (msm_rpc_read+0x10/0x78) from [<c0147b14>] (rpc_clients_thread+0x54/0x208)
[61999.974426] [<c0147b14>] (rpc_clients_thread+0x54/0x208) from [<c01e33f0>] (kthread+0x80/0x88)
[61999.983032] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[61999.990905] krmt_storagecln D c08894a4 0 669 2 0x00000000
[61999.997253] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c01479a0>] (rpc_clients_cb_thread+0x84/0x1a4)
[62000.006958] [<c01479a0>] (rpc_clients_cb_thread+0x84/0x1a4) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.015838] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.023712] krmt_storagecln D c08894a4 0 769 2 0x00000000
[62000.030090] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c014479c>] (__msm_rpc_read+0x22c/0xdb8)
[62000.039306] [<c014479c>] (__msm_rpc_read+0x22c/0xdb8) from [<c0145338>] (msm_rpc_read+0x10/0x78)
[62000.048034] [<c0145338>] (msm_rpc_read+0x10/0x78) from [<c0147b14>] (rpc_clients_thread+0x54/0x208)
[62000.056640] [<c0147b14>] (rpc_clients_thread+0x54/0x208) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.065643] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.073944] krmt_storagecln D c08894a4 0 770 2 0x00000000
[62000.079925] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c01479a0>] (rpc_clients_cb_thread+0x84/0x1a4)
[62000.089630] [<c01479a0>] (rpc_clients_cb_thread+0x84/0x1a4) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.098449] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.106842] koemrapiclientc D c08894a4 0 890 2 0x00000000
[62000.112731] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c014479c>] (__msm_rpc_read+0x22c/0xdb8)
[62000.121917] [<c014479c>] (__msm_rpc_read+0x22c/0xdb8) from [<c0145338>] (msm_rpc_read+0x10/0x78)
[62000.130706] [<c0145338>] (msm_rpc_read+0x10/0x78) from [<c0147b14>] (rpc_clients_thread+0x54/0x208)
[62000.139709] [<c0147b14>] (rpc_clients_thread+0x54/0x208) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.147888] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.156738] mpdecision D c08894a4 0 1687 1 0x00000000
[62000.162597] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0889fe8>] (schedule_timeout+0x1c/0x3cc)
[62000.171875] [<c0889fe8>] (schedule_timeout+0x1c/0x3cc) from [<c0889940>] (wait_for_common+0x17c/0x290)
[62000.180755] [<c0889940>] (wait_for_common+0x17c/0x290) from [<c01e36c0>] (kthread_create_on_node+0xec/0x154)
[62000.190979] [<c01e36c0>] (kthread_create_on_node+0xec/0x154) from [<c0875214>] (workqueue_cpu_callback+0x60/0x324)
[62000.201324] [<c0875214>] (workqueue_cpu_callback+0x60/0x324) from [<c088e74c>] (notifier_call_chain+0x2c/0x70)
[62000.211273] [<c088e74c>] (notifier_call_chain+0x2c/0x70) from [<c01c1c10>] (__cpu_notify+0x24/0x3c)
[62000.220275] [<c01c1c10>] (__cpu_notify+0x24/0x3c) from [<c087182c>] (_cpu_down+0x88/0x284)
[62000.228515] [<c087182c>] (_cpu_down+0x88/0x284) from [<c0871a50>] (cpu_down+0x28/0x3c)
[62000.236022] [<c0871a50>] (cpu_down+0x28/0x3c) from [<c08721b4>] (store_online+0x34/0x78)
[62000.244506] [<c08721b4>] (store_online+0x34/0x78) from [<c04e79c0>] (sysdev_store+0x1c/0x20)
[62000.252960] [<c04e79c0>] (sysdev_store+0x1c/0x20) from [<c02dd480>] (sysfs_write_file+0x108/0x13c)
[62000.261901] [<c02dd480>] (sysfs_write_file+0x108/0x13c) from [<c027e5e8>] (vfs_write+0xac/0x134)
[62000.270233] [<c027e5e8>] (vfs_write+0xac/0x134) from [<c027e71c>] (sys_write+0x3c/0x68)
[62000.278625] [<c027e71c>] (sys_write+0x3c/0x68) from [<c0105dc0>] (ret_fast_syscall+0x0/0x30)
[62000.287139] kworker/u:4 D c08894a4 0 19457 2 0x00000000
[62000.292999] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0141614>] (rr_read+0x1e8/0x214)
[62000.301574] [<c0141614>] (rr_read+0x1e8/0x214) from [<c0141864>] (do_read_data+0x20/0x15c0)
[62000.309906] [<c0141864>] (do_read_data+0x20/0x15c0) from [<c01dbd38>] (process_one_work+0x308/0x514)
[62000.319000] [<c01dbd38>] (process_one_work+0x308/0x514) from [<c01dc630>] (worker_thread+0x2a8/0x4a0)
[62000.327819] [<c01dc630>] (worker_thread+0x2a8/0x4a0) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.336456] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.344787] kworker/0:0 D c08894a4 0 22073 2 0x00000000
[62000.351104] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c0889fe8>] (schedule_timeout+0x1c/0x3cc)
[62000.359588] [<c0889fe8>] (schedule_timeout+0x1c/0x3cc) from [<c0889940>] (wait_for_common+0x17c/0x290)
[62000.369262] [<c0889940>] (wait_for_common+0x17c/0x290) from [<c01e36c0>] (kthread_create_on_node+0xec/0x154)
[62000.379089] [<c01e36c0>] (kthread_create_on_node+0xec/0x154) from [<c01db608>] (create_worker+0x1d4/0x2f0)
[62000.388702] [<c01db608>] (create_worker+0x1d4/0x2f0) from [<c01dc244>] (manage_workers+0x128/0x26c)
[62000.397735] [<c01dc244>] (manage_workers+0x128/0x26c) from [<c01dc564>] (worker_thread+0x1dc/0x4a0)
[62000.406341] [<c01dc564>] (worker_thread+0x1dc/0x4a0) from [<c01e33f0>] (kthread+0x80/0x88)
[62000.415008] [<c01e33f0>] (kthread+0x80/0x88) from [<c0106f38>] (kernel_thread_exit+0x0/0x8)
[62000.423370] zygote D c08894a4 0 22774 194 0x00000001
[62000.429687] [<c08894a4>] (__schedule+0x7c0/0x988) from [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc)
[62000.438659] [<c088a924>] (__mutex_lock_slowpath+0x1d0/0x2cc) from [<c088aa40>] (mutex_lock+0x20/0x40)
[62000.448272] [<c088aa40>] (mutex_lock+0x20/0x40) from [<c01c1c98>] (get_online_cpus+0x2c/0x48)
[62000.456787] [<c01c1c98>] (get_online_cpus+0x2c/0x48) from [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8)
[62000.466156] [<c01de8f0>] (schedule_on_each_cpu+0x24/0xe8) from [<c0272b24>] (compact_nodes+0x10/0x80)
[62000.475372] [<c0272b24>] (compact_nodes+0x10/0x80) from [<c06929c0>] (lowmem_shrink+0x47c/0x4c0)
[62000.483734] [<c06929c0>] (lowmem_shrink+0x47c/0x4c0) from [<c024d814>] (shrink_slab+0x104/0x1b8)
[62000.492889] [<c024d814>] (shrink_slab+0x104/0x1b8) from [<c024e5a0>] (try_to_free_pages+0x288/0x48c)
[62000.502014] [<c024e5a0>] (try_to_free_pages+0x288/0x48c) from [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac)
[62000.512084] [<c024401c>] (__alloc_pages_nodemask+0x3d4/0x6ac) from [<c01bd9c4>] (copy_process+0xcc/0x11d8)
[62000.521728] [<c01bd9c4>] (copy_process+0xcc/0x11d8) from [<c01bebd0>] (do_fork+0x100/0x2f4)
[62000.529632] [<c01bebd0>] (do_fork+0x100/0x2f4) from [<c0105dc0>] (ret_fast_syscall+0x0/0x30)
[62000.550445] QSEECOM: qseecom_release: data->released == false
[62000.727111] binder: release 840:840 transaction 1553609 in, still active
[62000.732849] binder: send failed reply for transaction 1553609 to 22730:22818
[62000.740203] binder: release 840:1527 transaction 1553604 in, still active
[62000.746795] binder: send failed reply for transaction 1553604 to 22799:22814
[62000.753692] binder: release 840:7111 transaction 1553608 in, still active
[62000.760589] binder: send failed reply for transaction 1553608 to 22799:22815
[62000.767517] binder: release 840:17479 transaction 1553614 in, still active
[62000.774444] binder: send failed reply for transaction 1553614 to 22231:22255
[62000.786437] binder: 22799:22815 transaction failed 29189, size 180-0
[62001.715759] alarm_release: clear alarm, pending 0
[62001.720397] alarm_release: clear alarm, pending 0
[62001.725006] alarm_release: clear pending alarms 6
[62001.856414] binder: 192:336 transaction failed 29189, size 84-0
[62002.859710] binder: 22730:22771 transaction failed 29189, size 260-0
[62003.817230] QSEECOM: qseecom_release: data->released == false
[62004.965148] binder: 22730:22771 transaction failed 29189, size 76-0
[62005.165924] binder: 22730:22746 transaction failed 29189, size 144-0
[62005.175384] binder: 22730:22771 transaction failed 29189, size 84-4
[62005.182617] binder: 22730:22771 transaction failed 29189, size 80-4
[62005.190032] binder: 22730:22768 transaction failed 29189, size 180-0
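If you capture your own dump after a reboot (e.g. adb shell cat /proc/last_kmsg, redirected to a local file), a quick way to skim a "Show Blocked State" trace like the one above is to list every task stuck in the D (uninterruptible sleep) state. A minimal sketch, using a two-line sample in place of a real capture:

```shell
# Minimal sketch: list tasks stuck in D state from a saved blocked-state dump.
# A two-line sample stands in for a real last_kmsg capture.
cat > /tmp/last_kmsg_sample.txt <<'EOF'
[61999.314056] kthreadd        D c08894a4     0     2      0 0x00000000
[61999.646179] ksmd            D c08894a4     0    42      2 0x00000000
EOF
# Field 2 is the task name, field 3 is the scheduler state.
awk '$3 == "D" { print $2 }' /tmp/last_kmsg_sample.txt
```

In the trace above, most of the D-state tasks (kthreadd, kswapd0, ksmd, both zygote threads) are sleeping in get_online_cpus(), while mpdecision is partway through cpu_down() waiting on kthread creation, which looks like a stall on the CPU-hotplug lock.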
I get a reboot just about every time I connect to my home Wi-Fi network. Sometimes it'll reboot several times in a row as it connects to the Wi-Fi network after booting. This has occurred on every 10.2 nightly through 10/26; I have not yet tried the 10/28 nightly.
I'll try to grab the last_kmsg tonight.
So I finally got around to grabbing the last_kmsg from when this Wi-Fi connect and reboot occurs. Here's the link: Last Kmsg - WiFi Reboot - Pastebin.com
To recreate this, I enabled Wi-Fi while in range of my home network. It connected for about a minute or so, then it seemed to drop out, reconnect, and then my phone froze and rebooted. I'm currently on the 10/28 nightly, but this has happened on every previous 10.2 nightly. If I leave Wi-Fi disabled at home, I don't get any reboots.
Hopefully this helps!
M1 is in prep stages, should be out in a few days

M1 is building now on jenkins / get.cm. Looks like it's up to d2spr so far (come on "V"s...)
PS: Looks like they either killed the 11/01 nightlies in progress before they got to us (toroplus finished) or the M snapshots trump nightlies in priority; it depends on how they've got the build server configured. In either case, check carefully. We may get two new .zips within the next few hours, and they may show up out of order depending on what they committed into the M1 build.