I am new to TensorFlow and machine learning. I am having trouble writing TensorFlow code that does text classification similar to what I have already done with the sklearn libraries.
This question is a bit broad. Perhaps you can take a look at the tutorial posted on TensorFlow's website for binary text classification (positive and negative) and try to implement it. During the process, if you come across any problems or concepts that need further explanation, search Stack Overflow to see if someone has already asked a similar question. If not, take the time to write a question following these guidelines so people who can answer will have all the information they need. I hope this gets you off to a good start, and welcome to Stack Overflow!
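For orientation, here is a minimal sketch of the kind of model that tutorial builds: a `TextVectorization` layer feeding a small Keras network. The toy texts and labels are placeholders you would replace with your own data, and the layer sizes are just illustrative.

```python
import tensorflow as tf

# Placeholder data: swap in your own texts and 0/1 labels.
texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]

# Map raw strings to integer token sequences.
vectorize = tf.keras.layers.TextVectorization(max_tokens=10000, output_sequence_length=20)
vectorize.adapt(texts)

model = tf.keras.Sequential([
    vectorize,                                       # string -> token ids
    tf.keras.layers.Embedding(10000, 16),            # token ids -> dense vectors
    tf.keras.layers.GlobalAveragePooling1D(),        # average over the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary decision
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=5)
```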
If you want to achieve competitive scores, I would rather use a pretrained embedder. Natural language is very high-dimensional, and nowadays there are plenty of pretrained architectures, so you simply encode your text into a latent space and then train your model on those features. It is also much easier to apply resampling techniques once you have a numerical feature vector.
Myself, I mostly use the LASER embedder from Facebook. Read more about it here. There is an unofficial PyPI package, which works just fine. Additionally, your model will work on dozens of languages out of the box, which is a nice bonus.
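A rough sketch of that workflow, assuming the unofficial package is `laserembeddings` and putting a scikit-learn classifier on top of the embedding vectors (the texts and labels are again placeholders):

```python
# pip install laserembeddings
# python -m laserembeddings download-models   # fetch the pretrained LASER weights
from laserembeddings import Laser
from sklearn.linear_model import LogisticRegression

laser = Laser()

texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]

# Encode each sentence into a 1024-dimensional multilingual vector.
X = laser.embed_sentences(texts, lang="en")

# Any classic classifier can then be trained on the embedding features.
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(laser.embed_sentences(["highly recommended"], lang="en")))
```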
There's also BERT from Google, but the pretrained model is rather bare, so you have to fine-tune it on your own task first.
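One common way to do that fine-tuning is with the Hugging Face transformers library and the `bert-base-uncased` checkpoint; both are my assumptions here, not something prescribed above, and the toy data again stands in for yours:

```python
# pip install transformers
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize the raw strings into input ids and attention masks.
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="np")

# No loss argument: the transformers model supplies its own classification loss.
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))
model.fit(dict(enc), tf.constant(labels), epochs=2, batch_size=2)
```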