This commit fixes nearly all of the reported doxygen warnings.
I tried not to clutter the log with removed trailing spaces.
The whitespace removal and tab/space conversion for all files affected by this
commit are in a separate branch.
After page loading has finished the number of free bytes left for page attributes is logged. It turns out that "usual" pages tend to make do with ~800 bytes while e.g. the Google search pages use all of the 2000 bytes of page attribute memory allocated by default (because of the long URLs with many parameters). So it seems that reducing this default isn't exactly the best way to reduce memory consumption...
CC_FASTCALL was introduced many years ago for the cc65 tool chain. It was never used for any other tool chain. With a798b1d648 the cc65 tool chain doesn't need CC_FASTCALL anymore.
When the client has already called webclient_close() it doesn't expect to have webclient_datahandler(NULL, 0) called just because the connection was closed by the server "at the same time". Rather, it expects to always have webclient_closed() called.
Calling webclient_datahandler(NULL, 0) instead of webclient_closed() means that the web browser shows "Done" in the status line instead of "Stopped". So the user is misled into thinking that he has already seen all of the page.
Note: webclient_close() is called by the client during newdata() so the already existing check for WEBCLIENT_STATE_CLOSE further above doesn't help.
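A minimal sketch of the intended callback behavior, assuming a state field like the one the existing WEBCLIENT_STATE_CLOSE check uses; the real logic in webclient.c has to cope with webclient_close() being called from within newdata(), so this is only an illustration:

    /* Sketch only -- not the actual webclient.c code. */
    static void
    connection_closed_by_server(void)
    {
      if(s.state == WEBCLIENT_STATE_CLOSE) {
        /* The client already asked to close: report "Stopped". */
        webclient_closed();
      } else {
        /* Normal end of data: the browser may show "Done". */
        webclient_datahandler(NULL, 0);
      }
    }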
../..//apps/er-coap/er-coap-observe.c:237:15: warning: unused variable
‘content’ [-Wunused-variable]
This was caused by a buffer that was declared, but used only in
commented-out code.
The variable was moved into the commented-out block.
The block was surrounded by an #if 0 ... #endif to make it easier to
uncomment.
Everything still compiles with the code in question uncommented.
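A hypothetical illustration of the resulting pattern; the real variable name and block contents in er-coap-observe.c differ:

    #if 0
      /* Declared only inside the disabled block, so the compiler never
         sees an unused variable while the code stays easy to re-enable. */
      char content[16];
      memset(content, 0, sizeof(content));
    #endif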
When a client sends a CoAP request with no block2 size,
the default one would be set to REST_MAX_CHUNK_SIZE.
However, this is not guaranteed to be a power of 2.
This can lead to clients receiving a bigger payload than expected as
part of the header, and ending up with duplicated content.
Setting the default to COAP_MAX_BLOCK_SIZE,
which is guaranteed to be a power of 2, fixes this.
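A hedged sketch of the two pieces involved; the macro derivation is paraphrased from how COAP_MAX_BLOCK_SIZE is typically derived from REST_MAX_CHUNK_SIZE, and the handler snippet uses illustrative variable names:

    /* Block2 sizes must be powers of two between 16 and 1024. */
    #define COAP_MAX_BLOCK_SIZE \
      (REST_MAX_CHUNK_SIZE < 32 ? 16 : \
      (REST_MAX_CHUNK_SIZE < 64 ? 32 : \
      (REST_MAX_CHUNK_SIZE < 128 ? 64 : \
      (REST_MAX_CHUNK_SIZE < 256 ? 128 : \
      (REST_MAX_CHUNK_SIZE < 512 ? 256 : \
      (REST_MAX_CHUNK_SIZE < 1024 ? 512 : 1024))))))

    /* When the request carries no Block2 option: */
    uint32_t block_num = 0, block_offset = 0;
    uint16_t block_size = 0;

    if(!coap_get_header_block2(request, &block_num, NULL, &block_size, &block_offset)) {
      block_size = COAP_MAX_BLOCK_SIZE;   /* previously REST_MAX_CHUNK_SIZE */
    }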
The default Telnetd idle timeout of 30 seconds seems somewhat short. Best to have it user-configurable (including the option to turn it off with a config value of 0).
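A sketch of the kind of configuration hook meant here; the macro name is illustrative and not necessarily the one used in the telnetd sources:

    #ifndef TELNETD_CONF_IDLE_TIMEOUT
    #define TELNETD_CONF_IDLE_TIMEOUT 30   /* seconds; 0 disables the timeout */
    #endif

    #if TELNETD_CONF_IDLE_TIMEOUT
      if(idle_time >= TELNETD_CONF_IDLE_TIMEOUT) {
        /* close the idle connection */
      }
    #endif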
In order to have the wget command make some sense the write command should be present too.
- On the Apple][, reducing the MTU seems to gain just enough RAM to have the (rather heavy-weight) full-blown C library file I/O working.
- On the C128 there's way too little RAM, so there's no wget command but only the file commands.
- On the CBMs a dummy lseek() was necessary to have the read command link (see the sketch below).
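A hypothetical sketch of such a dummy lseek() stub; the actual stub in the tree may differ:

    #include <unistd.h>

    /* Present only to satisfy the linker for the 'read' command;
       seeking is never actually performed. */
    off_t
    lseek(int fd, off_t offset, int whence)
    {
      (void)fd; (void)offset; (void)whence;
      return -1;
    }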
Forms with multiple submit buttons are rather rare but nevertheless the most popular web page (www.google.com) contains one with the two submit buttons "Google Search" and "I'm Feeling Lucky". So we want to support that - including the usual feature of interpreting the first button as the default button used when the user presses the ENTER key.
Script code may contain a '<' as part of an expression. We erroneously interpreted that as the start of a tag. Now we check whether the very next char is a '/' as the only tag allowed here is the closing </SCRIPT> tag.
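A minimal illustration of the check, with hypothetical helper names; the actual parser is driven differently:

    /* Inside <script>: only "</" may introduce a tag. */
    if(c == '<') {
      if(peek_next_char() == '/') {
        /* candidate for the closing </SCRIPT> tag */
      } else {
        /* e.g. "if(a < b)" -- treat the '<' as plain script text */
      }
    }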
When using the 'down' button a certain number of lines currently displayed at the bottom of the screen is redisplayed at the top of the screen. Given our usually small screen size and often large pages requiring many 'down' operations the number 'four' seems too generous so let's reduce it to 'two'.
- The wraparound handling when using the history with the 'back' button actually depends on history_last being unsigned (which is the default for cc65), so define it explicitly as unsigned to make it work on other targets too (illustrated below).
- As there's no 'forward' button it doesn't make sense to keep history entries after using them with the 'back' button. Clearing them on use, on the other hand, avoids an "infinite history".
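A hedged illustration of the signedness issue; the index handling is paraphrased, not quoted from the browser sources:

    static unsigned char history_last;

    /* Going back: decrement and detect wraparound past entry 0. */
    --history_last;
    if(history_last >= WWW_CONF_HISTORY_SIZE) {
      /* With an unsigned char this catches the wrap from 0 to 255;
         with a (default signed) char the value would be -1 and the
         comparison would not trigger. */
      history_last = WWW_CONF_HISTORY_SIZE - 1;
    }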
Although we for sure don't support HTTPS we need to recognize it. Nowadays it has become pretty usual to redirect HTTP URLs to HTTPS URLs in order to force privacy (thanks, NSA!). So far our redirection handler didn't recognize HTTPS URLs as absolute URLs and therefore appended them to the current URL. This led to an endless redirection loop. Now we recognize the HTTPS redirection and generate a minimal document on the fly to inform the user of the (for us unreachable) redirection target.
HTML links with HTTPS URLs are treated just like fragment-only links, meaning that they are simply ignored completely.
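A sketch of the prefix check implied above; buffer and helper names are illustrative and case-insensitive matching is ignored for brevity:

    if(strncmp(redirect_url, "https://", 8) == 0) {
      /* Absolute, but unreachable for us: don't append it to the current
         URL. Instead generate a minimal page showing the user where the
         server wanted to redirect to. */
      show_https_notice(redirect_url);   /* hypothetical helper */
    }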
The code to trim spaces from the end of the URL showed undefined behavior if the URL was empty. That scenario is far from hypothetical as e.g. pressing the 'back' button with no (more) entry in the history yields an empty URL.
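An illustration of the class of bug and its fix, with a paraphrased loop rather than the actual one:

    char *e = url + strlen(url);
    /* Without the "e > url" guard an empty URL makes the loop read
       url[-1], which is undefined behavior. */
    while(e > url && e[-1] == ' ') {
      *--e = 0;
    }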
The way our HTML parser triggers newlines is a guess at best. On the other hand our screen real estate is severely limited. Instead of trying to (further) improve the way we translate tags to newlines it seems more reasonable to simply never render more than two successive empty lines.
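A small sketch of the cap, using hypothetical names for the counter and the output routine:

    static unsigned char empty_lines;

    if(line_is_empty) {
      if(++empty_lines > 2) {
        return;              /* swallow any further empty lines */
      }
    } else {
      empty_lines = 0;
    }
    output_line();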
We don't handle URLs with fragments exactly ;-) well - meaning we send the fragment part to the server and we don't display the document starting at the anchor tag. Instead of adding the missing capabilities and thus lots of code I opted to simply ignore fragment-only links. This approach is based on the practical knowledge that fragments are primarily used for intra-document navigation - and as such are fragment-only links. And as we ignore them anyway when displaying the document it's more ergonomic to not have those links in the first place.
Complex script code tends to contain other tags inside strings. As we generally don't parse strings we erroneously interpreted those tags. The easiest workaround is to not interpret tags at all until the </SCRIPT> tag is found.
parse_tag() is called both for attributes inside a tag and for the end of the tag itself. For most tags parse_tag() doesn't distinguish between the two cases. This means that the "tag action" is additionally triggered for every tag attribute. When the "tag action" is setting some state this doesn't hurt. For many tags the "tag action" is to render a newline; superfluous newlines are sort of acceptable to keep the code as small as possible. However, the <li> "tag action" is to render a newline followed by an asterisk - and superfluous asterisks are ugly, so for <li> we check whether parse_tag() was called for the end of the tag itself.
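A sketch of that special case; the tag constant, the end-of-tag flag and the output helpers are hypothetical:

    case HTML_TAG_LI:
      /* Emit the bullet only once, when the tag itself ends, not for
         each of its attributes. */
      if(end_of_tag) {
        newline();
        add_char('*');
      }
      break;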
At the time do_word() is called, s.word[s.wordlen] is undefined. So it doesn't make sense to make decisions based on its value - and in fact I don't see why it was necessary/desirable in the first place.
It seems that this implementation of CoAP in Contiki is no longer
maintained in favor of the `er-coap` implementation. This commit
removes the code to prevent confusion and further bit-rot.
Nowadays many HTTP servers set cookies which may easily result in HTTP header fields longer than our 'httpheaderline' buffer. It doesn't hurt if we can't parse them but we need to be able to skip them and continue to parse the following header fields.
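A sketch of the skipping idea; the buffer name comes from the text above, the other names are hypothetical:

    if(len == sizeof(s.httpheaderline) - 1) {
      /* Field too long to parse: discard input up to the next newline,
         then resume with the following header field. */
      skipping = 1;
    }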
WWW_CONF_MAX_URLLEN is used as the length for the 'editurl' textentry widget. The CTK code for handling that widget uses a single byte for the length so it can't be > 255. Thus WWW_CONF_MAX_URLLEN can't be > 255 either.
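One way to make that limit explicit (not necessarily what the tree does) is a compile-time guard:

    #if WWW_CONF_MAX_URLLEN > 255
    #error "WWW_CONF_MAX_URLLEN must not exceed 255: the CTK textentry widget stores its length in a single byte"
    #endif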
Currently, the observe value for a response to a GET observe request is always set to zero. That may cause the subsequent notification to have the same observe value. In fact, that happens every time an observable resource is observed for the first time (since the obs_counter is implicitly initialized to zero).
This patch fixes this problem by setting the observe option value of responses to obs_counter (and then incrementing it).
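A minimal sketch of the fix, assuming er-coap's coap_set_header_observe() helper and the obs_counter mentioned above:

    /* Give the GET response the current counter value, then advance the
       counter so the first notification differs from the response. */
    coap_set_header_observe(response, obs_counter);
    obs_counter++;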
type process_data_t. This was an artifact from when the choice was
made to use the void * type for the data parameter in processes.
Changed the parameter 'void * data' of process_post_synch to
process_data_t for consistency.
Checked all the uses of process_start() in Contiki and fixed the casts
of the data parameter.
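A hedged sketch of the signatures as assumed after this change (paraphrased, not quoted from process.h), plus an illustrative call site with hypothetical names:

    typedef void *process_data_t;

    void process_start(struct process *p, process_data_t data);
    void process_post_synch(struct process *p, process_event_t ev,
                            process_data_t data);

    /* A caller then passes its argument without a char * cast: */
    process_start(&example_process, &example_config);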