What is the proper way of using @DirtiesContext with @EmbeddedKafka

Submitted by 扶醉桌前 on 2019-12-11 19:08:58

Question


We have a "little" problem in our project: the log fills with "Connection to node 0 could not be established. Broker may not be available." Tests run for a very, very long time, and this message is logged at least once per second. I did find out how to get rid of it; read on. If anything in the configurations/annotations below is incorrect, please let me know.

Versions first:

<springframework.boot.version>2.1.8.RELEASE</springframework.boot.version>

which automatically brings in

<spring-kafka.version>2.2.8.RELEASE</spring-kafka.version>

Now consider this integration test, annotated with:

@RunWith(SpringRunner.class)
@Import(/*some our configuration, irrelevant*/ )
@ActiveProfiles(/*some our profiles, irrelevant*/)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@EmbeddedKafka(controlledShutdown = true)
@Transactional

And then we have some tests in it, like:

@Test
@DirtiesContext
public void testPostWhatever() throws JSONException, IOException {
    // creates request data, POSTs it, then GETs it back
}
This test just creates some request data and invokes a POST, which in turn persists data into the DB. Then we use a GET to verify that we can find that data. Trivial.

One thing that seems a little odd to me is the transaction handling. The test class is annotated with @Transactional, but according to the log a transaction is only opened on the controller method, which in this example carries the same @Transactional annotation (it should be on the service, sure). Both use TxType.REQUIRED propagation. As a result, the rollback initiated by the test actually rolls back nothing, because the transaction was already committed. If you know why it behaves like that, please advise. But this is not the crux of this question.

So far we just put @DirtiesContext on this method, which should simply reinitialize the context. That solves the issue of the non-rolled-back data, at the high cost of context reinitialization. But then the following messages start appearing in the log:

2019-10-01 13:49:07.336 org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-2] Connection to node 0 could not be established. Broker may not be available.
2019-10-01 13:49:07.699 org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-10-01 13:49:08.191 org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-2] Connection to node 0 could not be established. Broker may not be available.
2019-10-01 13:49:08.603 org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-1] Connection to node 0 could not be established. Broker may not be available.
2019-10-01 13:49:08.927 o.a.c.loader.WebappClassLoaderBase       : The web application [ofs] appears to have started a thread named [kafka-producer-network-thread | producer-2] but has failed to stop it. This is very likely to create a memory leak. Stack trace of thread:
 sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
 sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
 sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
 sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
 sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
 org.apache.kafka.common.network.Selector.select(Selector.java:691)
 org.apache.kafka.common.network.Selector.poll(Selector.java:411)
 org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:510)
 org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
 org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
 java.lang.Thread.run(Thread.java:748)

Removing this @DirtiesContext from the method and placing it at class level instead, like

@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)

shows the same behavior (except with ridiculous extra overhead). But if I remove all @DirtiesContext annotations and instead clear the DB manually, committing the changes so that each test's modifications are reverted afterwards, everything works just fine: no warnings or errors.
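The manual-cleanup approach mentioned above could be sketched roughly like this; the repository name and the use of TransactionTemplate are my assumptions, not the author's actual code:

```java
// Hypothetical sketch of "clear db manually and commit changes":
// an @After hook that deletes the test data in its own committed
// transaction, so no @DirtiesContext is needed at all.
@Autowired
private WhateverRepository whateverRepository; // assumed repository name

@Autowired
private TransactionTemplate transactionTemplate;

@After
public void cleanUp() {
    // Runs in a fresh transaction, committed independently of the
    // (effectively no-op) test transaction described above.
    transactionTemplate.execute(status -> {
        whateverRepository.deleteAll();
        return null;
    });
}
```

Because the cleanup commits on its own, it reverts the data the controller's already-committed transaction wrote, which the test-level rollback could not reach.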

So I think there are two separate things here. My immediate problem is caused by incorrect transaction handling (please help), but it should also be possible to use @DirtiesContext together with spring-kafka, and that does not seem to be working. Either it's not possible in principle (or is it?), or I have something configured incorrectly (please help), or it is perhaps a bug?


Answer 1:


If you are using JUnit4, you can use EmbeddedKafkaRule as a @ClassRule instead of @EmbeddedKafka; the broker is then not added to the Spring context at all.
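A minimal sketch of the @ClassRule approach (topic name and the property used to wire the application to the broker are assumptions):

```java
// JUnit 4: the broker lives outside the Spring context, so
// @DirtiesContext can recreate the context without touching Kafka.
@ClassRule
public static EmbeddedKafkaRule embeddedKafka =
        new EmbeddedKafkaRule(1, true, "someTopic"); // 1 broker, controlledShutdown=true

@BeforeClass
public static void setUp() {
    // Point the application at the rule's broker before the context starts.
    System.setProperty("spring.kafka.bootstrap-servers",
            embeddedKafka.getEmbeddedKafka().getBrokersAsString());
}
```

The rule starts the broker once before the first test and stops it after the last, regardless of how often the Spring context is rebuilt in between.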

Unfortunately, there is no equivalent for JUnit 5 - but you can still add a static EmbeddedKafkaBroker and destroy it yourself in an @AfterAll method.
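For JUnit 5, that might look roughly like this (again, topic name and the bootstrap-servers property are assumptions):

```java
// JUnit 5: manage a static EmbeddedKafkaBroker by hand, since there
// is no @ClassRule equivalent.
static EmbeddedKafkaBroker broker;

@BeforeAll
static void startBroker() {
    broker = new EmbeddedKafkaBroker(1, true, "someTopic");
    broker.afterPropertiesSet(); // starts the embedded broker
    System.setProperty("spring.kafka.bootstrap-servers",
            broker.getBrokersAsString());
}

@AfterAll
static void stopBroker() {
    broker.destroy(); // shuts the broker down once, after all tests
}
```

As with the JUnit 4 rule, the broker's lifecycle is decoupled from the Spring context, so dirtying the context no longer orphans producer threads against a vanished broker.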



Source: https://stackoverflow.com/questions/58187190/what-is-the-proper-way-of-doing-dirtiesconfig-when-used-embeddedkafka
