Mirror of git://projects.qi-hardware.com/wernermisc.git (synced 2024-11-15 06:29:22 +02:00)
m1/patches/rtems/: pending RTEMS patches (relative to latest CVS)
commit aa0022d973
parent 20283fe9c2
m1/patches/rtems/chain-first-last.patch (new file, 44 lines)
@@ -0,0 +1,44 @@
This issue is under consideration:
http://www.rtems.org/pipermail/rtems-users/2011-November/009130.html

Doubly-linked lists ("chains") in RTEMS have a "control" block that
looks like the next/prev link pair in an element. The list elements
link both ways to this control block.

_Chain_Is_first and _Chain_Is_last only checked whether the link to
the adjacent node - which, for the first or last element, points to
the control block - is NULL. Judging by the function description and
given that there are already functions called _Chain_Is_head and
_Chain_Is_tail (which could be simplified), this is probably not the
intended behaviour.

This also affects the aliases rtems_chain_is_first and
rtems_chain_is_last.

These functions are not used a lot and I haven't seen any immediate
effect on M1 after changing them, so I can't say whether this patch
may unearth other problems.

- Werner

Index: rtems/cpukit/score/inline/rtems/score/chain.inl
===================================================================
--- rtems.orig/cpukit/score/inline/rtems/score/chain.inl  2011-11-12 09:12:46.000000000 -0300
+++ rtems/cpukit/score/inline/rtems/score/chain.inl  2011-11-12 09:13:47.000000000 -0300
@@ -297,7 +297,7 @@
   const Chain_Node *the_node
 )
 {
-  return (the_node->previous == NULL);
+  return the_node->previous->previous == NULL;
 }
 
 /** @brief Is this the Last Node on the Chain
@@ -314,7 +314,7 @@
   const Chain_Node *the_node
 )
 {
-  return (the_node->next == NULL);
+  return the_node->next->next == NULL;
 }
 
 /** @brief Does this Chain have only One Node
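To illustrate the layout described above, here is a small stand-alone C sketch of the corrected checks. The structures are simplified stand-ins assumed to follow the classic Chain_Control layout (the head and tail nodes overlap on the permanent_null field); they are not the actual RTEMS headers.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the RTEMS chain types (assumed layout). */
typedef struct Chain_Node {
  struct Chain_Node *next;
  struct Chain_Node *previous;
} Chain_Node;

typedef struct {
  Chain_Node *first;          /* head.next                          */
  Chain_Node *permanent_null; /* head.previous and tail.next (NULL) */
  Chain_Node *last;           /* tail.previous                      */
} Chain_Control;

/* The first element's previous link points at the control block's head,
 * whose own previous is the permanent NULL - hence the double dereference
 * introduced by the patch. Likewise for the last element and the tail. */
static bool Chain_Is_first( const Chain_Node *the_node )
{
  return the_node->previous->previous == NULL;
}

static bool Chain_Is_last( const Chain_Node *the_node )
{
  return the_node->next->next == NULL;
}

int main( void )
{
  Chain_Control chain;
  Chain_Node a, b;

  /* Chain with two elements: head -> a -> b -> tail. */
  chain.first = &a;
  chain.permanent_null = NULL;
  chain.last = &b;
  a.previous = (Chain_Node *) &chain;                /* head node */
  a.next = &b;
  b.previous = &a;
  b.next = (Chain_Node *) &chain.permanent_null;     /* tail node */

  printf( "a: first=%d last=%d\n", Chain_Is_first( &a ), Chain_Is_last( &a ) );
  printf( "b: first=%d last=%d\n", Chain_Is_first( &b ), Chain_Is_last( &b ) );
  return 0;
}

Compiled on its own, this prints first=1 last=0 for the first element and first=0 last=1 for the last one, which is what the corrected functions are meant to report.
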
m1/patches/rtems/coremsgsubmit-race.patch (new file, 98 lines)
@@ -0,0 +1,98 @@
This issue is under review:
https://www.rtems.org/bugzilla/show_bug.cgi?id=1961

If it's permissible to call rtems_message_queue_send from an
interrupt, then there is at least one race condition in the core
message subsystem.

This created the MIDI/mouse hang we love so much on M1.

The problem is as follows: RTEMS queues use pre-allocated message
buffers that are kept on an "inactive" (free) list. When enqueuing
a message, a buffer is first removed from the inactive list, data
is copied to it, and it is then added to the pending list.

The reverse happens when dequeuing. Besides these two queues, there
is also a counter called number_of_pending_messages keeping track,
as the name suggests, of the number of pending messages. It is
updated atomically together with changes to the pending buffers
list.

From the above it is clear that the counter will be out of sync with
the inactive list between the beginning and the end of an enqueue or
dequeue operation.

In order to minimize interrupt latency, RTEMS disables interrupts
only when adding and removing buffers from lists, but not throughout
the whole enqueuing/dequeuing operation. Instead, it disables the
scheduler during these operations, but this doesn't prevent
interrupts.

This means that the inconsistency between number_of_pending_messages
and the inactive list can be observed from an interrupt handler if
enqueuing or dequeuing is in progress.

_CORE_message_queue_Submit checks whether there is still room in the
queue by reading number_of_pending_messages. If there is room, it
then calls _CORE_message_queue_Allocate_message_buffer to obtain a
free buffer.

Given that number_of_pending_messages and the list of inactive
buffers can disagree, e.g., if _CORE_message_queue_Seize or another
_CORE_message_queue_Submit is executing concurrently,
_CORE_message_queue_Allocate_message_buffer may fail to obtain a
free buffer despite the prior test.

_CORE_message_queue_Allocate_message_buffer can detect a lack of
free buffers and indicate it by returning a NULL pointer. Checking
whether NULL has been returned instead of a buffer is optional and
depends on RTEMS_DEBUG.

If no check is performed, _CORE_message_queue_Submit will then try
to use the buffer. In the absence of hardware detecting the
de-referencing of NULL pointers, the wounded system will limp on a
little further until, at least in the case of M1, it finally hangs
somewhere.

The patch below avoids the problem in the scenario described above
by not using number_of_pending_messages as an indicator of whether
free buffers are available, but by simply trying to get a buffer,
and handling the case where that fails.

This is similar to how _CORE_message_queue_Seize works.

Another possibility would be to make testing of the_message no
longer optional. But then, there would basically be two tests for
the same condition, which is ugly.

- Werner

Index: rtems/cpukit/score/src/coremsgsubmit.c
===================================================================
--- rtems.orig/cpukit/score/src/coremsgsubmit.c  2011-11-12 09:15:12.000000000 -0300
+++ rtems/cpukit/score/src/coremsgsubmit.c  2011-11-12 09:15:17.000000000 -0300
@@ -101,21 +101,9 @@
    * No one waiting on the message queue at this time, so attempt to
    * queue the message up for a future receive.
    */
-  if ( the_message_queue->number_of_pending_messages <
-       the_message_queue->maximum_pending_messages ) {
-
-    the_message =
-      _CORE_message_queue_Allocate_message_buffer( the_message_queue );
-
-    #if defined(RTEMS_DEBUG)
-      /*
-       * NOTE: If the system is consistent, this error should never occur.
-       */
-
-      if ( !the_message )
-        return CORE_MESSAGE_QUEUE_STATUS_UNSATISFIED;
-    #endif
-
+  the_message =
+    _CORE_message_queue_Allocate_message_buffer( the_message_queue );
+  if ( the_message ) {
     _CORE_message_queue_Copy_buffer(
       buffer,
       the_message->Contents.buffer,
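The essence of the change is to replace "test a counter, then assume allocation will succeed" with "try to allocate, then handle failure". A minimal sketch of the two patterns, with a hypothetical free-list allocator standing in for _CORE_message_queue_Allocate_message_buffer (illustrative only, not RTEMS code):

#include <stddef.h>

/* Hypothetical types; only the pattern matters. */
typedef struct Buffer { struct Buffer *next; char data[64]; } Buffer;

typedef struct {
  Buffer  *inactive;                    /* free-buffer list             */
  unsigned number_of_pending_messages;  /* may lag behind the free list */
  unsigned maximum_pending_messages;
} Queue;

/* Taking a buffer off the free list is the only step assumed atomic
 * (RTEMS does it with interrupts disabled); returns NULL when empty. */
Buffer *allocate_buffer( Queue *q )
{
  Buffer *b = q->inactive;
  if ( b )
    q->inactive = b->next;
  return b;
}

/* Racy pattern (before the patch): the counter can still show room while
 * a concurrent submit or seize has already taken the last free buffer,
 * so the allocation may return NULL despite the test. */
Buffer *submit_racy( Queue *q )
{
  if ( q->number_of_pending_messages < q->maximum_pending_messages )
    return allocate_buffer( q );
  return NULL;
}

/* Robust pattern (after the patch): just try, and treat NULL as "queue
 * full", much like _CORE_message_queue_Seize treats an empty queue. */
Buffer *submit_robust( Queue *q )
{
  Buffer *b = allocate_buffer( q );
  if ( !b )
    return NULL;                        /* caller reports the queue as full */
  return b;
}
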
m1/patches/rtems/lm32-stack-alignment.patch (new file, 48 lines)
@@ -0,0 +1,48 @@
This patch is under review:
https://www.rtems.org/bugzilla/show_bug.cgi?id=1956

The comments in cpukit/score/cpu/lm32/rtems/score/cpu.h state that
CPU_STACK_ALIGNMENT should either be 0 or >= CPU_ALIGNMENT. The
latter was 8, the former is 4.

Further investigation revealed that, contrary to what the comment
says, 0 is not a valid value for CPU_STACK_ALIGNMENT.

This patch sets CPU_ALIGNMENT to 4, since we don't have any machine
word larger than that. (Doubles and long longs are handled in
software and are either extremely slow already or rare.)

The patch also corrects the misleading comment before
CPU_STACK_ALIGNMENT.

I'm not sure if this fix has any real-life impact on M1 behaviour,
but I guess it can't hurt.

- Werner

Index: rtems/cpukit/score/cpu/lm32/rtems/score/cpu.h
===================================================================
--- rtems.orig/cpukit/score/cpu/lm32/rtems/score/cpu.h  2011-11-12 03:01:35.000000000 -0300
+++ rtems/cpukit/score/cpu/lm32/rtems/score/cpu.h  2011-11-12 03:03:46.000000000 -0300
@@ -637,7 +637,7 @@
  *
  * XXX document implementation including references if appropriate
  */
-#define CPU_ALIGNMENT 8
+#define CPU_ALIGNMENT 4
 
 /**
  * This number corresponds to the byte alignment requirement for the
@@ -687,9 +687,10 @@
  * stack. This alignment requirement may be stricter than that for the
  * data types alignment specified by @ref CPU_ALIGNMENT. If the
  * @ref CPU_ALIGNMENT is strict enough for the stack, then this should be
- * set to 0.
+ * set to @ref CPU_ALIGNMENT.
  *
- * @note This must be a power of 2 either 0 or greater than @ref CPU_ALIGNMENT.
+ * @note This must be a power of 2 either equal to or greater than
+ *       @ref CPU_ALIGNMENT.
  *
  * Port Specific Information:
  *
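If one wanted to enforce the corrected rule at build time, a C89-style compile-time check along the following lines would do. The two defines only mirror the patched lm32 values for illustration; neither the defines nor the check are part of the patch:

/* Values as set for lm32 by this patch (illustrative copy). */
#define CPU_ALIGNMENT        4
#define CPU_STACK_ALIGNMENT  CPU_ALIGNMENT

/* Fails to compile (negative array size) unless CPU_STACK_ALIGNMENT is a
 * power of two and at least CPU_ALIGNMENT, as the corrected comment requires. */
typedef char cpu_stack_alignment_check[
  ( CPU_STACK_ALIGNMENT >= CPU_ALIGNMENT &&
    ( CPU_STACK_ALIGNMENT & ( CPU_STACK_ALIGNMENT - 1 ) ) == 0 ) ? 1 : -1 ];
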
m1/patches/rtems/rbtree-container-of.patch (new file, 32 lines)
@@ -0,0 +1,32 @@
This issue is under consideration:
http://www.rtems.org/pipermail/rtems-users/2011-November/009097.html

There seem to be two issues in the original code:

- the "node" argument of the macro is not protected, which could lead
  to very hard to find errors (this doesn't seem to cause any
  immediate problems just now, but it's a lousy risk to take)

- more seriously, "offsetof" counts in bytes while arithmetic on the
  "node" pointer counts in multiples of whatever size that object has

RTEMS with this patch applied runs well (on Milkymist), but I don't
know if the code in question actually is executed.

- Werner

Index: rtems/cpukit/score/include/rtems/score/rbtree.h
===================================================================
--- rtems.orig/cpukit/score/include/rtems/score/rbtree.h  2011-11-12 08:52:50.000000000 -0300
+++ rtems/cpukit/score/include/rtems/score/rbtree.h  2011-11-12 09:00:35.000000000 -0300
@@ -90,7 +90,9 @@
  *
  */
 #define _RBTree_Container_of(node,container_type, node_field_name) \
-  ((container_type*) (node - offsetof(container_type,node_field_name)))
+  ((container_type*) ((void *) (node) - \
+      offsetof(container_type,node_field_name)))
+
 
 /**
  * This type indicates the direction.
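The byte-versus-element-size problem is easy to reproduce outside RTEMS. Below is a stand-alone sketch with a hypothetical container struct; the fixed variant counts in char units rather than using the GCC void * arithmetic extension that the patch itself relies on:

#include <stddef.h>
#include <stdio.h>

typedef struct Node { struct Node *left, *right; } Node;

typedef struct {
  int  payload;
  Node node;       /* embedded link, as in the RTEMS red-black tree */
} Container;

/* Buggy form: 'node - offsetof(...)' steps back offsetof() *Nodes*, not
 * bytes, and leaves the macro argument unparenthesized. */
#define CONTAINER_OF_BUGGY(node, type, field) \
  ((type *) (node - offsetof(type, field)))

/* Fixed form, equivalent to the patched macro: count in bytes and protect
 * the argument. */
#define CONTAINER_OF_FIXED(node, type, field) \
  ((type *) ((char *) (node) - offsetof(type, field)))

int main( void )
{
  Container c;
  Node *n = &c.node;

  printf( "container     %p\n", (void *) &c );
  printf( "buggy result  %p\n", (void *) CONTAINER_OF_BUGGY( n, Container, node ) );
  printf( "fixed result  %p\n", (void *) CONTAINER_OF_FIXED( n, Container, node ) );
  return 0;
}

Because node is not the first member of Container, the buggy macro subtracts sizeof(Node) times too many bytes, so its result does not match the container address, while the fixed result does.
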
m1/patches/rtems/series (new file, 4 lines)
@@ -0,0 +1,4 @@
lm32-stack-alignment.patch
rbtree-container-of.patch
chain-first-last.patch
coremsgsubmit-race.patch