04 April 2013

Interview with Lena Bayeva

Lena Bayeva is a Computational Linguist at Textkernel, an innovative company that specializes in multilingual semantic recruitment technology. In this interview, Lena shares her job-hunting experience, tells us her story, and gives her opinions on the present and future of NLP and Computational Linguistics.

  • How did you get into the world of computational linguistics? Please tell us where and what you studied, and what your previous work experience is.
I started out studying Computer Science at Portland State University. During this time I took a few Machine Learning courses, which got me really excited about the field. I then worked as a developer for IBM (Oregon), but never gave up the idea of studying Machine Learning. I went on to get a Master's in Artificial Intelligence at the University of Amsterdam with a focus on Machine Learning and Information Extraction. The Machine Learning courses gave me a very good foundation for Computational Linguistics, which uses many of the general ML algorithms. Understanding the mechanics behind learning is quite important when applying it to a specific problem. Instead of blindly applying a learning algorithm, it helps to know what types of learning algorithms are out there and how they differ, why some methods work better for certain domains or data sets than others, where the error comes from, and so on. A course in Elements of Language Processing and Learning was quite helpful as well. It was based on the book Speech and Language Processing by Jurafsky and Martin, which I highly recommend as a starting point.
  • What work do you do now? What is your company and your current responsibilities?
I’m currently working for Textkernel, a company that uses language technology to deliver solutions to the HR sector, including CV parsing and Semantic Search and Match (of candidates to vacancies and vice versa). I’m happy to say that I get to apply my Machine Learning skills at my job. My responsibilities include developing multilingual information extraction models, preprocessing text and post-processing the results, and evaluation and error analysis. Of course, there is a lot of software engineering involved as well.
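As an illustration of the evaluation part: extraction models are typically scored by comparing predicted entity spans against gold annotations. Below is a minimal sketch in Python; the entity labels, offsets, and example data are hypothetical, not Textkernel's actual format.

    # Span-level precision/recall/F1 for an extraction model.
    # Entities are (label, start, end) tuples; the data is made up.
    def evaluate_extraction(predicted, gold):
        predicted, gold = set(predicted), set(gold)
        tp = len(predicted & gold)  # spans extracted exactly right
        precision = tp / len(predicted) if predicted else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    gold = [("JOB_TITLE", 0, 18), ("SKILL", 25, 31), ("SKILL", 40, 47)]
    pred = [("JOB_TITLE", 0, 18), ("SKILL", 40, 47), ("SKILL", 50, 55)]
    print(evaluate_extraction(pred, gold))  # roughly (0.67, 0.67, 0.67)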
  • How difficult was it to find a job in industry after graduation?
I was lucky to have found a job rather quickly. There are a few small companies in Amsterdam that use some form of AI, and many more across Europe and the US. If you are willing to relocate, there are plenty of job opportunities.
  • What is your advice to recent graduates looking for a job in industry?
There are a lot of articles on the web that provide useful advice to recent graduates and job seekers in general. I’m going to mention a few things that I personally found useful.
  1. Think of your dream job and focus your search with both your experience to date and that dream job in mind. Your first job might not be the dream job, but it should take you a step or more closer to it and help you develop the necessary skills.
  2. Networking always helps: talk to your friends and professors, and join groups that specialize in your field.
  3. Capitalize on your achievements: projects you’ve worked on, perhaps your thesis or publications, anything that shows employers you have a deep understanding of some aspect of the field and/or relevant experience.
  • What do you think about the near future of NLP – which areas are going to grow in the next few years?
First and foremost, I see the need for development of NLP methods that effectively use unlabeled data for learning. Much has been done in the direction of semi-supervised and unsupervised learning; however, there is still a need for a better understanding of when unlabeled data can be advantageous and how it can best be incorporated.
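To make the idea concrete, one classic way to use unlabeled data is self-training: train on the labeled set, label the unlabeled examples the model is most confident about, and retrain. The sketch below uses scikit-learn; the confidence threshold and data shapes are illustrative assumptions, not a statement about any particular system.

    # Self-training: a simple semi-supervised loop (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=5):
        clf = LogisticRegression()
        for _ in range(max_rounds):
            clf.fit(X_lab, y_lab)
            if len(X_unlab) == 0:
                break
            proba = clf.predict_proba(X_unlab)
            conf = proba.max(axis=1)   # model confidence per example
            keep = conf >= threshold   # only trust confident predictions
            if not keep.any():
                break                  # nothing confident left to add
            # promote confidently self-labeled examples into the labeled pool
            X_lab = np.vstack([X_lab, X_unlab[keep]])
            y_lab = np.concatenate(
                [y_lab, clf.classes_[proba[keep].argmax(axis=1)]])
            X_unlab = X_unlab[~keep]
        return clf

Whether this helps or hurts depends on exactly the question raised above: if the model's confident predictions are wrong, self-training reinforces its own errors.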

Second, transfer and multi-task learning techniques are a promising way to reuse shared information across domains and tasks. It is often the case that information from a domain for which we have a lot of data is shared with other domains for which data is less plentiful. Ideally we would like to reuse or transfer as much information as possible between domains to help build better models, but this should be accomplished without introducing noise (negative transfer). Exactly what information can be transferred, and when, is an important problem that needs to be investigated in depth.
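One well-known and very simple instance of this idea is the feature augmentation of Daumé III (2007), where every feature gets a shared copy plus a domain-specific copy, and the learner's weights decide what transfers. A minimal sketch, with illustrative domain names:

    # "Frustratingly easy" domain adaptation:
    # x -> [shared | source-only | target-only].
    import numpy as np

    def augment(X, domain, domains=("source", "target")):
        n, d = X.shape
        out = np.zeros((n, d * (1 + len(domains))))
        out[:, :d] = X                        # shared copy, active everywhere
        i = domains.index(domain)
        out[:, d * (1 + i): d * (2 + i)] = X  # copy active only in this domain
        return out

A single linear model trained on the augmented source and target data can then put transferable information on the shared weights and domain-specific quirks on the rest.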

Third, an interesting area is feature extraction and data representation. How can we (mostly) automatically extract features useful for a particular learning task? Much has been done in this direction as well, but this problem is far from solved.
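A simple example of (mostly) automatic feature extraction is latent semantic analysis, which turns raw text into dense features with no hand-engineering beyond tokenization. A sketch using scikit-learn, with a made-up corpus:

    # LSA: tf-idf counts -> low-dimensional latent document features.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.pipeline import make_pipeline

    corpus = [
        "senior java developer with machine learning experience",
        "data scientist skilled in python and statistics",
        "front-end engineer, javascript and css",
    ]
    lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
    features = lsa.fit_transform(corpus)  # one dense vector per document
    print(features.shape)                 # (3, 2)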

Another interesting area is systems that are capable of easily incorporating new data and learning over time. In this regard, the NELL project by Tom Mitchell (Carnegie Mellon University) is quite inspiring.
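For a flavor of what incremental learning looks like in practice, scikit-learn's partial_fit interface lets a model update on batches as they arrive rather than retraining from scratch. The simulated stream below only illustrates the idea behind lifelong learners like NELL, not NELL itself.

    # Incremental learning: the model improves as new batches stream in.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    clf = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])  # all classes must be declared up front

    rng = np.random.default_rng(0)
    for _ in range(10):         # each iteration = a newly arrived batch
        X_batch = rng.normal(size=(32, 5))
        y_batch = (X_batch[:, 0] > 0).astype(int)  # toy labeling rule
        clf.partial_fit(X_batch, y_batch, classes=classes)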

In general, I would like to see different NLP and Machine Learning methods combined to build more complete NLP systems: starting with a representation appropriate for the learning task, then modeling the data with methods that use transfer techniques to reuse common knowledge, unlabeled data to deal with sparseness, clustering and other techniques to deal with ambiguity in classes, and so on.


About the author:
Maxim Khalilov, PhD is the R&D manager at TAUS B.V. and the co-founder of NLPPeople.com. He is a former post-doctoral researcher at the University of Amsterdam, a former intern at Macquarie University (Australia), and a former PhD student at the Polytechnic University of Catalonia (Spain).
