Smart Bluetooth Forum Discussions
Is there a way to force a specific peripheral to disconnect from a central device when multiple peripherals are connected to it?
There only seems to be blecm_disconnect(BT_ERROR_CODE_CONNECTION_TERMINATED_BY_LOCAL_HOST), but how is the peripheral specified? I have tried using emconinfo_setConnHandle(), and this seems to work sometimes, but other times emconinfo starts returning bogus data for the other peripherals.
Can you provide some pseudo-code to show how this "should" be done?
Many thanks
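Since the question asks for pseudo-code: the sketch below shows one way this is commonly attempted, under two explicit assumptions — that the emconinfo.h/blecm.h function names behave as their headers suggest, and that the application recorded each connection handle in its connection-up callback (the peer_handles array here is hypothetical). This is a sketch, not a verified recipe.

```c
// Sketch only, assuming handles were saved in the connection-up callback.
#include "blecm.h"
#include "emconinfo.h"

static UINT16 peer_handles[2];  // hypothetical: filled in the connection-up callback

void disconnect_peripheral(int idx)
{
    // Remember which connection emconinfo currently points at.
    int saved = emconinfo_getConnHandle();

    // Point emconinfo at the target connection, then request the disconnect.
    emconinfo_setConnHandle(peer_handles[idx]);
    blecm_disconnect(BT_ERROR_CODE_CONNECTION_TERMINATED_BY_LOCAL_HOST);

    // Restore the previous context so other code keeps seeing valid data.
    emconinfo_setConnHandle(saved);
}
```

The save/restore step is the part the original post seems to be missing: leaving the handle pointed at the disconnected peer would plausibly explain emconinfo returning bogus data afterward.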
This may be a very simple question, but the pieces just aren't falling into place for me. I'm working on an embedded product that will need to communicate with both iOS and Android devices and determine which one is attempting to communicate with it. I'm currently working on setting up the GATT and services. Is there anything special I need to include or handle to allow my peripheral product to determine whether it is communicating with an iOS or Android device?
thanks in advance and sorry if this is something I've simply overlooked,
Michelle
I connected my JTAG probe, an IAR J-Link (yellow box), to the BCM20737_TAG board, but the connection failed. Could the problem be that the probe is not an original Segger box?
I am continuing the discussion started here: Re: how can i use to i2c between bmi160 sensor and BCM20732? in a fresh post.
I tried the I2C API defined in i2cm.h with no success. I am using SDK 2.1.0 in the current example. Here is a quick recap of the source code I wrote to test the API.
> In the main function (executed once at boot):
....
//Initialization of the i2cm interface.
i2cm_init();
i2cm_setSpeed(I2CM_SPEED_100KHZ);
i2cm_setTransactionSpeed();
> In one second timer function (executed every second):
UINT8 readData[2];
UINT8 writeData=0xe3;
...
//write 0xe3 @ device with address 0x40, and read 2 bytes of data.
//0x80 is actually (0x40<<1), 0x40 is the address of the sensor I am trying to access.
UINT8 status = i2cm_comboRead(&readData[0], 2, &writeData, 1, 0x80);
switch (status)
{
    case I2CM_SUCCESS:
        ble_trace0("processI2C_CMD> Success!\n");
        break;
    case I2CM_OP_FAILED:
        ble_trace0("processI2C_CMD> Failed!\n");
        break;
    case I2CM_BUSY:
        ble_trace0("processI2C_CMD> Busy!\n");
        break;
}
I ran this piece of code on a device integrated with the 20737S that has the sensor connected to it, without success. The returned value is **I2CM_BUSY**.
I also ran the same code on tag board #3 with the 20737 chip. My expectation here is that I should see at least the first write going out to 0x40, with nothing coming back. I don't see anything on the bus (I am connected to the J9 connector on tag board #3, pin 2 (SCL) and pin 3 (SDA)), and the function returns I2CM_BUSY, same as above.
Please comment. Thanks. (My current backup plan is to use the API defined in cfa.h.)
Note: if I sniff the traffic on the I2C bus during the download of an application, I do see the access to the EEPROM at addresses A0 and A1 for about 5 seconds, so my I2C measurement setup is correct.
Hi, I am trying to use the SPI2 slave configuration P26_CS_P24_CLK_P33_MOSI_P2_MISO.
I used the code below and changed
spi2PortConfig.spiGpioConfig = SLAVE2_P02_CS_P03_CLK_P00_MOSI_P01_MISO;
to
spi2PortConfig.spiGpioConfig = SLAVE2_P26_CS_P24_CLK_P33_MOSI_P2_MISO;
but it doesn't work. Any thoughts?
void spiffy2_slave_initialize(void)
{
    // Use SPIFFY2 as slave.
    spi2PortConfig.masterOrSlave = SLAVE2_CONFIG;

    // Pull down the MOSI/CLOCK/CS inputs when in slave mode.
    spi2PortConfig.pinPullConfig = INPUT_PIN_PULL_DOWN;

    // Use P3 for CLK, P0 for MOSI and P1 for MISO in slave mode.
    spi2PortConfig.spiGpioConfig = SLAVE2_P02_CS_P03_CLK_P00_MOSI_P01_MISO;

    // DO NOT configure CS_PORT and CS_PIN in slave mode - the HW takes care of this.
    // There is no need to configure the speed either - the master selects it.

    // Initialize the SPIFFY2 instance.
    spiffyd_init(SPIFFYD_2);

    // Configure the SPIFFY2 HW block.
    spiffyd_configure(SPIFFYD_2, SPEED, SPI_MSB_FIRST, SPI_SS_ACTIVE_LOW, SPI_MODE_3);
}
Revision | Change Description | Date
---|---|---
1.0 | Initial Draft | 10/02/14 - 5:00PM
Problem connecting to the WICED Sense Kit
Some Android phones/tablets (particularly ones running 4.3 Jelly Bean) appear to have issues pairing with the WICED Sense Tag from inside the app.
We've tried to take care of this by adding explicit Bluetooth pairing code to the WICED Sense App (by calling the createBond() API).
This fix doesn't work every time on every device. If you cannot connect to the WICED Sense Kit, please try pairing with the device from the Settings app first.
a. Turn on the WICED Sense Tag
b. On the phone/tablet, go to Settings-> Bluetooth Settings screen
c. Scan for devices and find the "WICED Sense Kit"
d. If the WICED Sense Kit is already paired, unpair it first.
e. Explicitly pair with the WICED Sense Kit from the Settings app.
f. Launch the WICED Sense App and connect to the WICED Sense Kit
If that does not work,
a. Unpair the device from the Settings->Bluetooth Settings screen
b. Turn off Bluetooth
c. Turn on Bluetooth
d. Launch the WICED Sense App and connect to the WICED Sense Kit
Thanks
JT
Documentation (pages 15 and 16 of WICED Smart Hardware Interfaces) says that all four PUART signals (RX, TX, CTS, RTS) must be selected from the same group. There are two groups, and I noticed that on the TAG3 board, three signals (RX, TX, RTS) come from the second group while one (CTS) comes from the first. Is this a typo in the documentation or a TAG3 layout error?
Documentation such as "How to Write WICED Smart Applications" indicates that the NVRAM ID range for applications to use is from 0x10 to 0x6f. Yet VS_BLE_HOST_LIST, used in examples like cycling_speed_cadence, has a value of 0x70.
Is this invalid application use of NVRAM, or is something else happening when NVRAM is used in this range?
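For comparison, a write and read confined to the documented application range might look like the following sketch. The bleprofile_WriteNVRAM()/bleprofile_ReadNVRAM() names follow the SDK sample applications, and VS_MY_APP_DATA is a made-up ID chosen from within 0x10-0x6f; this is a sketch under those assumptions, not a verified snippet.

```c
// Sketch only - API names per the SDK samples, the ID is hypothetical.
#define VS_MY_APP_DATA 0x10   // first ID in the documented app range 0x10..0x6f

UINT8 data[4] = {1, 2, 3, 4};

// Both calls return the number of bytes transferred (0 on failure).
UINT8 written = bleprofile_WriteNVRAM(VS_MY_APP_DATA, sizeof(data), data);
UINT8 read    = bleprofile_ReadNVRAM(VS_MY_APP_DATA, sizeof(data), data);
```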
Show Less
Hi, we are looking at using the security features of the BCM20737 for authentication purposes, but this is all very new to us. The 20737 datasheet states that there are application notes on the security features, but I have not found anything on the website. There is a post on the forum that points to the comments in the RSA.h file, but this is insufficient for someone with little or no prior knowledge. The 20737 supports X.509 certificate exchange, and we would really like some information on this too. Are there any plans for a detailed application note on security/authentication?
Also, are there any plans to release a Technical Reference Manual for the 20737S, as you've done for the other modules?
Thanks
I had previously posted a question related to this problem here:
but I thought I would start a new thread since the problem may not be related to the EEPROM.
I have a design based on the 20732S that overall seems to work quite well. However, a few boards develop what appears to be a hardware problem with the UART after some period of time in use. Note that these boards are being used for development, so we reprogram them via the UART frequently (whereas in the application this will only happen once).
What we observe is that we will have a board that is working fine for a long time, and then at some point I try to program it via the UART and the board is no longer detected. From that point onwards, I can never get it to program again. There doesn't seem to be any particular event that triggers this, although in one case it happened after the serial line was accidentally disconnected during programming.
In trying to diagnose the problem, I put a scope on the RX and TX lines of the UART, and I can see why the chip fails to be detected.
Looking at the HCI serial port with a scope, I see unusual spikes on the TX line when the chip is transmitting. That is, I reset the chip with HCI_RX held high to bring it into the bootloader. Then, I use the IDE to detect and program. The IDE first goes through the detection process and attempts to contact the bootloader over the HCI port by sending the string '01 03 0c 00 01 4d fc 05 1c d2 00 00 01'. The chip responds with another string, but the response is garbled because of strange spikes that occur at exactly 1/10 the baud rate, at 11.52 kHz.
These spikes can be seen in the attached photo of the scope trace. You can see the blue trace is showing a bit on the TX line -- the chip is trying to respond to the detection string. However interspersed with the real bits are weird ramping up and spiking glitches. These are occurring at exactly 11.52kHz.
We are concerned about this problem because we are moving toward production, and as yet we don't understand why this happens or what causes it. It might be acceptable if it is related to a programming failure that only happens in development, where we reprogram a lot over HCI (vs. once in production)... but at this point we are nervous.
One more point to consider: we found that we needed to add the PMU clock warmup time hack to get our boards to work. Is it possible that this problem is somehow related to that?
I tried heating up the chip with a heat gun to see if I could program it, but this did not help. I have also attempted to hold down SDA on boot to try to reset the EEPROM, which did not work. I was doubtful that it would help, since the problem seems to be with the UART / bootloader.
So far on our development boards, this has happened on about 10% of the boards overall, which is a pretty high rate.