Senior Data Scientist
Islamabad, PK
Engineering – Data Science / Full Time / On-site
About PackageX
PackageX automates data entry and manual logistics processes for receiving, inventory, and fulfilment in buildings, warehouses, and stores. It uses advanced AI scanning, flexible bolt-on apps, and APIs to drive exceptional workforce productivity, fulfilment efficiency, and real-time visibility.
Our vision is to build the most advanced logistics infrastructure company that orchestrates the movement of physical things and becomes the defining backbone of the digital supply chain.
We're a fast-growing pre-Series A stage startup in New York City with a distributed global team backed by Bullpen Capital, Pritzker Group, Sierra Ventures, Ludlow Ventures, MXV Capital, and NSV Wolf Capital.
What we are looking for
We’re looking for a hands-on data scientist with experience building scalable analytics and machine-learning systems, who is fluent in Python (Pandas, NumPy), SQL, and ML/DL frameworks such as TensorFlow, PyTorch, or Keras, with practical knowledge of NLP, computer vision, and modern modeling techniques. You should be comfortable building backend services and RESTful APIs with Django/Flask, working with Docker and Git/GitHub, and deploying on cloud platforms (AWS/GCP) while using SQL Server, MongoDB, or a similar DBMS. The role involves designing data models, ETL pipelines, and data warehouses, integrating enterprise BI/visualization tools (Tableau, Power BI, Matplotlib/Seaborn), and applying repeatable automation, testing, and CI/CD practices. Ideal candidates have worked on distributed engineering teams, can move quickly on bug fixes and enhancements, communicate clearly, bring innovative solutions to tough problems, and have weathered high-pressure projects and learned from them.
You will:
· Build scalable, efficient, and high-performance software around our data analytics platform to help us expand its reporting, analytics & machine learning capabilities.
· Build out frameworks and services for our machine learning systems and dashboards.
· Follow coding best practices: unit testing, design/code reviews, code coverage, documentation, etc.
· Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
· Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
· Respond quickly to bug fixes and enhancement requests, and take direction and complete tasks on time with minimal supervision.
· Work with and help implement data models/data schemas for data scientists and data analysts to build visualizations and statistical/predictive models.
We are looking for someone who has:
· 3+ years of experience working as a data scientist
· Expertise in Python, SQL, Pandas, NumPy
· Experience with machine/deep learning frameworks such as TensorFlow, Keras, or PyTorch
· Depth and breadth of understanding of computer vision, NLP, and deep learning algorithms
· Extensive expertise in Django and Flask
· Worked with GitHub, Microsoft SQL Server/MongoDB or another DBMS, and AWS & GCP
· Been part of a distributed software engineering team
· Experience with RESTful APIs and server-side API integration
· Worked with Docker or other containerization software
· Experience with Data Warehousing, ETL Pipelines, large complex data sets, complex SQL queries/NoSQL platforms
· Experience designing data models (data schemas)
· Desire to bring new and innovative solutions to the table to resolve challenging software issues as they may develop throughout the product lifecycle
· Sound knowledge of repeatable automated processes for building, testing, documenting and deploying an application at scale
· Been on at least one “death march” and know exactly why some things are to be avoided.
· Excellent communication and interpersonal skills, taking the initiative to recommend and develop innovative solutions
· Fluency in Python stack for data processing, visualization, and modeling (Matplotlib/Seaborn)
· Experience working with AI/analytics teams
· Worked on visualization development with enterprise tools (Tableau, Power BI)
· Worked on integration of enterprise BI tools
· Understanding of data mining & machine learning concepts
What can you expect from the application process?
All applications will be reviewed by the People team, who will reach out to shortlisted candidates. Across various interview rounds, you'll speak with the hiring manager and other functional heads. We want to have an open discussion about your work and how we can be a great fit for each other. The process may also involve an assessment or presentation relevant to the role. You can expect an offer after three rounds of interviews. All offers are subject to satisfactory reference and background checks.