author     Don Dominic <a0486429@ti.com>
           Wed, 23 Jun 2021 17:56:27 +0000 (23:26 +0530)
committer  Ankur <ankurbaranwal@ti.com>
           Thu, 24 Jun 2021 16:27:30 +0000 (11:27 -0500)
commit     043f8756301f95ca28608e65b60a7127a7125ec3
tree       39921ef32ba4473c40e0a4b0a5220079c6ee4373
parent     a8aabab44d7fdc7e29ed994d49141cb1d2dcc0ff
[PRSDK-8844] IPC: Lock Mutex in RPMessage_recvNb() API
- This avoids semaphore count overflow when RPMessage_recvNb() is used
  in a non-baremetal (RTOS) scenario. In the non-baremetal case, the
  internal RPMessage_enqueMsg() API posts the semaphore (calls
  unlockMutex) for every message received, but the non-blocking receive
  never pended on it, so the count could overflow. RPMessage_recvNb()
  therefore now calls lockMutex with a timeout of zero to balance each
  post.
- Validated the multicore IPC echo test with mcu1_0 (RTOS) <-> mcu2_0
  (baremetal).
Signed-off-by: Don Dominic <a0486429@ti.com>
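
A minimal sketch of the pattern described in the message, assuming
hypothetical lockMutex()/unlockMutex() semaphore wrappers (the commit
message uses these names); the signatures, status codes, and _sketch
function bodies below are illustrative and are not the actual
ipc_api.c code:

    #include <stdint.h>

    #define IPC_SOK       (0)
    #define IPC_ETIMEOUT  (-5)

    /* Hypothetical wrappers around a counting semaphore: unlockMutex()
     * posts the semaphore, lockMutex() pends on it with a timeout. */
    extern int32_t lockMutex(void *ep, uint32_t timeout);
    extern void    unlockMutex(void *ep);

    /* Receive path (RTOS case): runs once per incoming message, so the
     * semaphore count grows by one per message enqueued. */
    static void RPMessage_enqueMsg_sketch(void *ep)
    {
        /* ... enqueue the message on the endpoint's queue ... */
        unlockMutex(ep);
    }

    /* Non-blocking receive: pend with timeout zero so every dequeued
     * message also decrements the semaphore, keeping the count from
     * overflowing when only RPMessage_recvNb() is used. */
    static int32_t RPMessage_recvNb_sketch(void *ep)
    {
        /* Timeout of zero: return immediately if no message pending. */
        if (lockMutex(ep, 0U) != IPC_SOK)
        {
            return IPC_ETIMEOUT;  /* no message available */
        }
        /* ... dequeue and copy the message to the caller's buffer ... */
        return IPC_SOK;
    }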
packages/ti/drv/ipc/src/ipc_api.c