UMTS (and other) connection optimizations

asked 2014-02-02 22:41:34 +0300 by attah

updated 2015-01-11 20:13:11 +0300

Warning: technical content!

A new OS with complete control over (or at least access to) the whole stack is a unique opportunity. Given this, I want to contribute a few ideas...

I did my BSc thesis on the power consumption characteristics of UMTS and the room for improvement in this area. A shorter version (research paper) is here: http://liu.diva-portal.org/smash/get/diva2:432501/FULLTEXT01 Especially interesting here are figures 4 and 5, illustrating linger timers on different operators, and the semi-related response times.

So yes, this will focus on UMTS. However, I have since gained further insights; for example, protocols that are unaware of connection characteristics hurt not only power consumption but also perceived performance, and these effects may apply to other radio technologies as well. One such example is the TCP Nagle and delayed-ACK interaction, which, while annoying in the normal case, gets its negative effects amplified by an unintended downswitch to the common channel state (CELL_FACH). The usual application-side mitigation is sketched below.
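
For reference, a minimal sketch of that mitigation: disabling Nagle with the standard TCP_NODELAY socket option, so small writes go out immediately instead of stalling behind the peer's delayed ACK (the server name and request are placeholders):

```python
import socket

# Disable Nagle's algorithm so small writes are sent immediately
# instead of queueing behind an unacknowledged segment. This trades a
# little packet overhead for latency - relevant on FACH, where the
# stall is amplified by the low data rate.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```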

So I propose looking at changes and optimizations in the following general areas.

  • The IP heartbeat thing is probably quite good, but it could also gain from traffic shaping. Example: polling for email is very low-data; if it is shaped to < 10 KB/s (well below 32 KB/s, anyway) it will "fit" on FACH, and not only will the transfer itself be less power hungry, the UE will also consume less energy by not lingering in DCH afterwards. It may still be prudent to release the shaping if there are any emails to fetch. (A shaping sketch follows this list.)

  • Use of fast dormancy. This is probably only necessary for networks that do not have URA_PCH (technical term: shitty networks). Correct use would cut down on unnecessary time spent in connected mode wasting energy (lack of URA_PCH usually goes together with long downswitch timers, in order to retain some degree of responsiveness). Too aggressive use would severely impact the user experience (i.e. huge response times, since we drop to idle). Tread lightly, but I'm sure there are significant gains to be made. (Maybe trigger it at an IP heartbeat event that required DCH, since we can be pretty sure we are done with the connection once all applications have done what they wanted. A trigger-policy sketch follows the list.)

  • Move connection (energy) cost and characteristics closer to userland. (Deliberately fluffy.) If some coordinating middle layer knew the cost of connections, we would no longer be flying blind when trying to optimize. People on good networks would also not have to suffer from optimizations made for worse networks. Applications would declare their desired use of the network: how often, how much, degree of user importance, etc., plus some general sanity limits for what is okay to consume. I could then set my e-mail client to "poll every 30 minutes, but use at most 5% of my battery per day" or "poll as much as you can get away with for 2% of my battery". This would serve as input on rates, persistence, scheduling together with other access events (piggybacking), possible FACH shaping and so on. (A strawman sketch follows below.)
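
To make the shaping idea from the first bullet concrete, here is a minimal userland token-bucket sketch. The rate and burst numbers are illustrative assumptions, and poll_imap() is a hypothetical helper:

```python
import time

class TokenBucket:
    """Crude userland shaper; rate and burst are illustrative values
    chosen to stay well below the ~32 KB/s that fits on FACH."""

    def __init__(self, rate_bytes_per_s=10_000, burst_bytes=2_000):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def wait_for(self, nbytes):
        # Refill from elapsed time, then block until nbytes fit.
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return
            time.sleep((nbytes - self.tokens) / self.rate)

# Hypothetical usage: shape an e-mail poll so it "fits" on FACH, and
# drop the shaping once we know there is real mail to fetch.
# bucket = TokenBucket()
# for chunk in poll_imap():      # poll_imap() is a made-up helper
#     bucket.wait_for(len(chunk))
#     sock.sendall(chunk)
```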

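For the fast dormancy bullet, the decision logic could be as simple as the following. Note this is a pure sketch: request_fast_dormancy() stands in for whatever modem/RIL hook the platform actually exposes, and the quiet-period threshold is an arbitrary assumption.

```python
import time

def request_fast_dormancy():
    """Hypothetical stand-in for whatever modem/RIL hook the platform
    exposes; as far as I know there is no standard userland API."""
    pass

def maybe_release_connection(active_apps, last_traffic_ts,
                             network_has_ura_pch, quiet_s=3.0):
    # Trigger-policy sketch only; the quiet period is an arbitrary guess.
    if network_has_ura_pch:
        return  # decent network: PCH already makes the tail cheap
    if active_apps:
        return  # someone still expects traffic; dropping to idle would hurt
    if time.monotonic() - last_traffic_ts < quiet_s:
        return  # too soon; a response may still be in flight
    request_fast_dormancy()
```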

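And as a strawman for what the application "declarations" in the third bullet might look like; every field, unit and number here is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass
class NetworkIntent:
    """What an application declares to the coordinating layer.
    All fields, units and numbers are illustrative assumptions."""
    name: str
    period_s: int              # desired polling interval
    bytes_per_poll: int        # rough transfer size per poll
    importance: int            # 1 (background) .. 5 (user is waiting)
    battery_budget_pct: float  # max share of the battery per day

def admit(intent, est_joules_per_poll, battery_capacity_j):
    # Stretch the polling interval until the declared budget holds.
    polls_per_day = 86_400 / intent.period_s
    spend = polls_per_day * est_joules_per_poll
    budget = battery_capacity_j * intent.battery_budget_pct / 100
    if spend <= budget:
        return intent.period_s
    return intent.period_s * spend / budget

# "Poll every 30 minutes, but use at most 5% of my battery per day."
mail = NetworkIntent("email", 1800, 20_000, importance=2,
                     battery_budget_pct=5.0)
print(admit(mail, est_joules_per_poll=12.0, battery_capacity_j=30_000))
```
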
Misc other stuff:

  • Possibly tie the network timers to the upper limit of the TCP retransmission timer, to avoid a downswitch for a connection that should be active. (A sketch of reading the live retransmission timeout follows this list.)

  • For really bad networks (long connection setup time / lack of URA_PCH), keep us in CELL_FACH by sending a tiny amount of nonsense data if it is likely that the user is about to load (for example) another webpage. (Sketched after the list, together with the next point.)

  • (2015-01-11) For LTE it might be useful in some cases to keep the time alignment timer from expiring (by, similarly to the last point, sending small packets of nonsense data), thus avoiding a random access procedure. Only for certain very interactive things, such as web browsing, of course. And since LTE is so dynamic, the power consumption penalty will be negligible - you might even gain slightly compared to a RA.
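
On Linux, the current retransmission timeout of a live connection can be read via the TCP_INFO socket option, which is one way to start on the retransmission-timer point. A sketch; the 104-byte read size and the byte offset of tcpi_rto reflect my reading of the kernel's struct tcp_info layout, so treat them as assumptions to verify:

```python
import socket
import struct

def current_rto_ms(sock):
    # Linux's struct tcp_info begins with a handful of u8 fields padded
    # to 8 bytes, so the first u32, tcpi_rto (microseconds), should sit
    # at byte offset 8. 104 bytes covers the fields we need.
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 104)
    (rto_us,) = struct.unpack_from("I", info, 8)
    return rto_us / 1000.0

# If current_rto_ms(conn) can exceed the inferred downswitch timer, a
# retransmission may fire after the UE has already left DCH - exactly
# the case where holding (or nudging) the state would help.
```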

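Both of the last two points boil down to the same mechanism: periodically emit a tiny throwaway packet while the user is likely to act again. A minimal sketch; the destination (the discard service on a placeholder address), interval and duration are all assumptions, and the "likely to act again" trigger is left out:

```python
import socket
import time

def keep_radio_warm(host="192.0.2.1", port=9, interval_s=1.5,
                    duration_s=10.0):
    """Send tiny throwaway datagrams so the RRC state machine (or the
    LTE time alignment timer) doesn't wind down while the user is
    likely to act again. Port 9 is the discard service; host, interval
    and duration are placeholder values."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        s.sendto(b"\x00", (host, port))
        time.sleep(interval_s)
```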

Much of this should preferably be done with feedback on which RRC state we are in: any additional access while already on DCH costs pretty much nothing, and if we are shaping for FACH but get DCH anyway, we might as well go full out in rate. Knowing the RRC state also lets us deduce the downswitch timers and see whether URA_PCH is present. Unfortunately AT+CSCON doesn't work, but maybe there are other interfaces to this. Failing a direct interface, the timers can be inferred from response times, as sketched below.
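
This is essentially the measurement method behind figures 4 and 5 in the paper: probe after increasing idle gaps and watch for jumps in response time. A sketch; the echo-service address is a placeholder, and any low-latency responder works:

```python
import socket
import time

def probe_rtt_after_idle(idle_s, host="192.0.2.1", port=7):
    """Response time of a tiny echo after idle_s of silence. A jump
    from a few ms to hundreds of ms suggests a DCH->FACH switch fell
    inside the gap; a jump to seconds suggests idle (or PCH). The echo
    service address is a placeholder - any low-latency responder works."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(10.0)
    s.sendto(b"x", (host, port)); s.recvfrom(64)  # force DCH first
    time.sleep(idle_s)                            # let the timers run
    t0 = time.monotonic()
    s.sendto(b"x", (host, port)); s.recvfrom(64)
    return time.monotonic() - t0

# Sweep the gap to locate the operator's timers:
# for gap in (1, 3, 5, 8, 12, 20, 40):
#     print(gap, probe_rtt_after_idle(gap))
```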

Please let me know what you think.


UPDATE: I should give (more) credit to this guy: http://mobilesociety.typepad.com/mobile_life/2010/06/umts-state-switching-and-fast-dormancy-evolution.html He makes these concepts really easy to understand and has great insights. There are more posts on the subject on his blog - so by all means, click around and find more interesting things :)
