Binance Accelerator Program - Data Engineer
Asia: Melbourne, Australia / Brisbane, Australia / Sydney, Australia / Taipei, Taiwan
Engineering – Big Data / Early Careers / Remote
Binance is the global blockchain company behind the world’s largest digital asset exchange by trading volume and users, serving a greater mission to accelerate cryptocurrency adoption and increase the freedom of money.
Are you looking to be a part of the most influential company in the blockchain industry and contribute to the cryptocurrency revolution that is changing the world?
About Binance Accelerator Program
The Binance Accelerator Program is a concise fixed-term program designed for Early Career Talent to have an immersive experience in the rapidly expanding Web3 space. You will be given the opportunity to experience life at Binance and understand what goes on behind the scenes of the world's leading blockchain ecosystem. Alongside your job, there will also be a focus on networking and development, which will expand your professional network and build transferable skills to propel you forward in your career. Learn about the BAP Program HERE
Who may apply
Current university students and recent graduates who can commit to a tenure of at least 6 months, at a minimum of 3 days per week
Responsibilities:
- Participate in big data application development, including Binance Square (feed), search, and other related projects
Requirements:
- Computer-related major, currently pursuing a Bachelor's or Master's degree
- Solid Java/Python foundation and an understanding of fundamentals such as multithreading and networking
- Familiarity with the Spring microservices framework and an understanding of how it operates
- Familiarity with MySQL and common middleware such as Redis, RabbitMQ, and Kafka
- Strong logical thinking, a strong sense of teamwork, and a strong ability to learn
- Good communication skills; bilingual English/Mandarin is required in order to coordinate with overseas partners and stakeholders
- Experience with open source projects is preferred
- Experience with the big data stack (Hadoop, HBase, Spark) and with building batch/streaming data flows is preferred