Senior Data Engineer

Taipei / Engineering / Remote First / Hybrid
About us
Gogolook is a leading TrustTech company founded in 2012. Guided by its core value of "Build for Trust," it aims to create an AI- and data-driven global anti-fraud network and to provide Risk Management as a Service. From communication and fintech to SaaS, Gogolook uses technology to deliver trustworthy empowerment across a range of fields.
A founding member of the Global Anti-Scam Alliance (GASA), Gogolook has also teamed up with a number of institutions, including the Taiwan National Police Agency Criminal Investigation Bureau, the Financial Supervisory Service of South Korea, the Royal Thai Police, the Fukuoka city government, the Philippines Cybercrime Investigation and Coordinating Center, and the Royal Malaysia Police and state governments, to fight fraud and, ultimately, to build a trustworthy communication network backed by the largest number database in East and Southeast Asia. In July 2023, Gogolook completed its IPO on the Taiwan Innovation Board (TIB) of the Taiwan Stock Exchange (TWSE) under the ticker 6902.

Why you should join Gogolook
1. Influential products: We build meaningful products that create value for society and defend against fraud.
2. Emphasis on self-growth: We encourage participation in technical communities and subsidize tickets for conferences and workshops, so your learning is continuously supported by the company.
3. Unleash your talent: We respect everyone's professional opinions, encourage open discussion among team members, and build awesome products together.
4. Transparent culture: We share company information openly with all members; everyone can read it, give feedback, and take part in shaping proposals.

The existing team at Gogolook primarily builds applications in the FinTech sector. Leveraging a variety of rich data sources, we strive to create valuable models, and we are therefore actively seeking data engineers with a strong passion for data platforms.

As a data engineer, you will play a significant role in designing, developing, and maintaining our large-scale ETL pipelines and our storage and processing services. You will be responsible for keeping the ETL pipelines stable and for integrating a wide variety of data sources to support the development and analysis teams. Your expertise will be crucial in establishing a data platform with an emphasis on high flexibility and scalability, and your primary duty is to consistently guarantee the accuracy of the data assets our users rely on.

If you are enthusiastic about working at the intersection of data engineering and FinTech application development, this role offers a dynamic opportunity to contribute to the creation of precise and reliable data assets for impactful decision-making.

Responsibilities

    • Work with Data Analysts and PMs to understand data needs, and with other Data Engineers to understand ETL practices and data models, ensuring high-quality data deliverables.
    • Track and monitor the debugging process, and document and deliver testing results.
    • Design, develop, and maintain a data quality assurance framework (see the sketch after this list).
    • Analyze complex data logic to develop automated, reusable solutions for extracting requested information while assuring data validity and integrity.
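
To give a flavor of the data quality assurance work above, here is a minimal sketch of an automated batch check in Python. The table shape, column names, and rules are hypothetical, and pandas is assumed purely for illustration; the actual framework and checks are defined by the team.

```python
import pandas as pd

# Hypothetical quality rules for a daily "transactions" extract.
REQUIRED_COLUMNS = ["txn_id", "user_id", "amount", "created_at"]

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable failures; an empty list means the batch passed."""
    failures = []

    # Schema check: every expected column must be present.
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        return [f"missing columns: {missing}"]  # later checks need these columns

    # Completeness: the primary key must be non-null and unique.
    if df["txn_id"].isnull().any():
        failures.append("null txn_id values found")
    if df["txn_id"].duplicated().any():
        failures.append("duplicate txn_id values found")

    # Validity: amounts should be positive.
    if (df["amount"] <= 0).any():
        failures.append("non-positive amounts found")

    return failures

if __name__ == "__main__":
    batch = pd.DataFrame({
        "txn_id": [1, 2, 2],
        "user_id": [10, 11, 12],
        "amount": [100.0, -5.0, 20.0],
        "created_at": pd.to_datetime(["2024-01-01"] * 3),
    })
    for failure in run_quality_checks(batch):
        print("FAILED:", failure)
```

In a production framework, checks like these would typically run as a pipeline step after each load, with failures reported to monitoring rather than printed.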

Minimum qualifications

    • Minimum of 3 years of hands-on software development experience in languages including Python.
    • Hands-on experience with cloud platforms like AWS.
    • Proficiency in developing robust data pipelines, including data collection and ETL (Extract, Transform, Load) processes, using orchestration tools such as Airflow (see the sketch after this list).
    • Experience designing and implementing the various components of a data platform, including data ingestion, storage, warehousing, and orchestration.
    • Strong expertise in SQL and experience optimizing query performance to ensure efficient data retrieval and processing.
    • Thorough understanding of distributed systems such as Hadoop, Presto, and Spark.
    • Experience in continuous integration and continuous deployment (CI/CD) processes.
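
As an illustration of the Airflow-based pipelines mentioned above, here is a minimal DAG sketch assuming Airflow 2.x. The DAG id, task names, and data are hypothetical; it shows only the extract → transform → load shape, not an actual Gogolook pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables; a real pipeline would read from actual sources.
def extract(**context):
    """Collect raw records from an upstream source (e.g. an API or object store)."""
    return [{"id": 1, "amount": 100}, {"id": 2, "amount": -5}]

def transform(ti, **context):
    """Clean the extracted records, dropping invalid rows."""
    rows = ti.xcom_pull(task_ids="extract")
    return [r for r in rows if r["amount"] > 0]

def load(ti, **context):
    """Write the transformed records to the warehouse (stubbed here)."""
    rows = ti.xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

Passing rows between tasks via XCom works for small payloads like this; larger datasets would normally move through external storage, with tasks exchanging only references.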

Preferred qualifications

    • Experience in establishing and implementing data governance practices.
    • Experience with machine learning platforms.
    • Experience with Docker.
    • BS/MS degree in Computer Science, Engineering, or a related subject, or equivalent professional experience in the field.