250 Data Engineer Jobs in Colombia
Data Engineer
Posted yesterday
Job Description
JOIN OUR TEAM! We are a leading market research company in LATAM, headquartered in Panama and present in 13 countries across Latin America and the Caribbean. We have more than 30 years of experience helping our clients understand the commercial dynamics of the modern and traditional trade channels, using cutting-edge technology to generate value and optimize their businesses.
We are currently looking for a DATA ENGINEER for our TECHNOLOGY division. We offer a permanent contract, continuous training, and growth prospects at dichter & neira.
The overall purpose of the role is to analyze, develop, and optimize the functionality of the company's Navigators, guaranteeing their performance and correct use through constant updates, analysis of technical and functional requirements, feasibility assessments, and precise documentation, as well as advising and training users to prevent incidents and ensure efficient use of the tools.
Key duties and responsibilities include:
1. Produce the functional analysis for new studies, and update and improve existing ones
2. Control, analyze, and supervise the functional development of the Navigators, ensuring their correct use and optimal performance
3. Advise and train the various project teams to prevent problems with the programs and get the most out of them
4. Fully understand each client's needs and convey them clearly to the team
5. Generate requirement inputs for study requests, and provide the development team with the input forms and requirement templates detailing each request, based on a clear understanding of the end client's needs
Degree in Systems Engineering, Statistics, or a related field, with a professional specialization or graduate diploma. Advanced command of Databricks, Azure Data Factory, and SQL Server; knowledge of Python.
Experience in market research, data engineering, and strategic development
2+ years of experience in similar roles.
Benefits include: flexible hours, hybrid work, a career plan, bonuses and incentives, among others.
If you fit the profile, we invite you to apply and join a company that will offer you continuous growth!
Minimum education level: Postgraduate degree (completed)
dichter & neira is a leading company in understanding consumer behavior and purchase dynamics in both the modern and traditional trade channels, applying state-of-the-art technology.
Our business solutions rest on four pillars of expertise:
1. Modern Trade: We analyze every transaction of the leading retailers using big data technology.
2. Traditional Trade: We measure and equip neighborhood stores, capturing each transaction daily and at scale, down to the SKU level.
3. In-Store Execution: We improve our clients' in-store execution with our proprietary Image Recognition Technology algorithm.
4. Consumer: We understand consumers' motivations, habits, preferences, and satisfaction levels.
We are proud to help our clients strengthen their market decisions through a team of more than 450 professionals.
dichter & neira was founded in 1986 and operates in more than 10 countries across Latin America.
Data Engineer
Posted yesterday
Job Description
We are looking for top talent to join our team on a high-impact project for one of the country's leading business groups.
You will be part of Sumz, the company that is revolutionizing how we understand information and work with it. If you recognize that data is a valuable asset and believe technology is the key to building a customer-centered vision of value, this role is for you.
About the role:
In this role you will be responsible for expanding and optimizing our data architecture and pipelines, and for optimizing data flow and collection for cross-functional teams. You will be part of the technology team, building the area according to world-class best practices and methodologies.
We are looking for bilingual professionals with more than 1 year of relevant experience as a data engineer.
Back-end development experience is required, preferably in Python and with machine learning tools. You should also know data structures, relational and NoSQL databases, big data repositories, and similar technologies.
You should be analytical, organized, self-motivated, autonomous, and agile, with strong execution skills and close attention to detail.
- Create, propose, implement, and maintain an optimal data pipeline and architecture
- Assemble large, complex, structured and unstructured datasets that meet functional and non-functional business requirements
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and designing infrastructure for greater scalability
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using big data technologies
- Build analytics tools that use the data pipeline to deliver actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work with stakeholders across the product, data, and design teams to help with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across multiple data centers and regions in AWS, GCP, Azure, or other clouds
- Build data tools for the analytics and data science teams that help them build and optimize our product into an innovative industry leader
- Work with the data and analytics teams to extend the functionality of our data systems
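As a concrete illustration of the pipeline responsibilities above, here is a minimal extract-transform-load sketch in Python; the record fields are hypothetical, and the stdlib sqlite3 module stands in for a real warehouse.

```python
import sqlite3

# Stand-in for raw records pulled from heterogeneous sources (field names illustrative).
raw_events = [
    {"user_id": "u1", "amount": "19.90", "country": "CO"},
    {"user_id": "u2", "amount": "bad",   "country": "CO"},  # malformed row
    {"user_id": "u3", "amount": "5.00",  "country": "MX"},
]

def transform(rows):
    """Keep only rows with a parseable amount; normalize types."""
    clean = []
    for r in rows:
        try:
            clean.append((r["user_id"], float(r["amount"]), r["country"]))
        except ValueError:
            continue  # in production, route to a dead-letter table instead of dropping
    return clean

def load(conn, rows):
    """Append the cleaned batch to the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS events (user_id TEXT, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(conn, transform(raw_events))
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM events").fetchone()
```

A real version would swap the in-memory source and sink for connectors, but the extract → validate → load shape is the same.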
- Degree in systems engineering, electronic engineering, or a related field
- 2+ years of proven experience as a data engineer
- Knowledge of machine learning
- Knowledge of Python
- Knowledge of big data architectures
- Knowledge of data structures, relational and NoSQL databases, big data repositories, and similar technologies
- Excellent verbal and written communication skills
- Intermediate to advanced English
- Plus: knowledge of DevOps and Google Cloud Platform
• Opportunities for growth and development within the company
• A competitive compensation and benefits package
• The opportunity to work from home and from an exclusive area of Bogotá
Data Engineer
Posted 3 days ago
Job Description
We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize the greatest contributions to the company can come from anywhere in the organization, and we know that the next one could be yours!
What’s unique about this job (What you’ll do)
At PriceSmart, our data engineering capability is at the heart of how we scale, automate, and transform the member experience across 13 countries. From integrating APIs to building robust data warehouses in the cloud, this role plays a key part in shaping the digital backbone of our organization.
As a Data Engineer, you’ll design and maintain scalable pipelines and automation workflows that connect our systems, simplify our operations, and help us make faster, better-informed decisions. You’ll collaborate across automation, product, and infrastructure teams to deliver trusted, high-quality data at the speed of the business.
If you thrive at the intersection of engineering, automation, and problem-solving—this is your opportunity to build with purpose.
Responsibilities
- Develop process automation and integration workflows across APIs (REST/SOAP)
- Model and optimize relational databases (SQL Server, PostgreSQL, MySQL)
- Build and maintain cloud data warehouse structures using Snowflake
- Apply data warehousing principles (Kimball or Inmon) to drive scalable data architecture
- Leverage Git, CI/CD, and DataOps practices for versioning and continuous delivery
- Implement robust solutions using Informatica IICS (Cloud Data Integration or App Integration)
- Collaborate cross-functionally with developers, automation engineers, and business teams
- Ensure data quality, security, and lineage in all engineering processes
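Much of the API integration work described above reduces to walking a paginated endpoint until it is exhausted. A minimal sketch of that pattern follows; the endpoint, its token-based paging scheme, and the field names are all assumptions for illustration, and the HTTP call is stubbed so the control flow is the focus.

```python
# Hypothetical paginated API: each "page" returns items plus a next-page token.
FAKE_PAGES = {
    None: {"items": [1, 2], "next": "p2"},
    "p2": {"items": [3],    "next": "p3"},
    "p3": {"items": [4, 5], "next": None},
}

def fetch_page(token):
    # In a real workflow this would be an authenticated HTTP GET
    # (e.g. via urllib.request); stubbed here for illustration.
    return FAKE_PAGES[token]

def extract_all():
    """Follow next-page tokens until the API reports no more pages."""
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page["items"])
        token = page["next"]
        if token is None:
            return items

records = extract_all()
```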
Bring your passion and expertise (Who you are)
- You bring 3–5+ years of experience as a Data Engineer, Software Developer, or Automation Engineer
- You have strong experience building process automation or web-based applications
- You know how to integrate APIs securely and efficiently (REST/SOAP)
- You’ve modeled and optimized relational databases in real-world environments
- You understand how to design data warehouses using Kimball or Inmon frameworks
- You’re comfortable using Git, setting up CI/CD pipelines, and managing cloud workflows
- You’ve worked with Snowflake and Informatica IICS (CDI/CAI) in a production environment
- You’re fluent in Spanish and English and thrive in cross-functional, multicultural teams
Some important intangibles
- You feel connected to our mission and values: Integrity, Respect, Accountability, Passion, Community, and Continuous Improvement
- You are a self-starter who doesn’t need direct supervision to motivate you for success
- You enjoy sharing your quirkiness and talents with your coworkers
- You enjoy working hard
- You are full of energy for the things you see as challenging
- You are not fearful of acting with a minimum of planning
- You have the ability to remain calm when dealing with unforeseen constraints
Our Commitment
We not only embrace and celebrate the diversity of our membership base and communities but also strive to achieve the same in our employees. At PriceSmart, we are committed to equal employment opportunity, regardless of race, color, religion, national origin, gender, sexual orientation, age, disability, veteran status, or any other class protected by applicable law. We are proud to be an equal opportunity employer.
Get to know us
PriceSmart was founded with a purpose: to inspire and impact the lives and businesses of our Members, our employees, and our communities through the ethical delivery of the best quality goods and services at the lowest possible prices.
Throughout the years, we have constantly asked ourselves how we can do more and have a greater impact. We want to prove that we are a company that can grow, be profitable, and do good in the world, and we have learned that it takes a great organizational culture to achieve that goal.
At PriceSmart, you can look forward to company events, anniversaries celebrating our employees with more than 20, or 30 years of tenure, volunteering and learning opportunities, and just a great company filled with curious, kind folks. Dreaming up and sharing ideas aren’t responsibilities reserved for certain teams or leaders; the challenge of building our own culture is on all of our shoulders. That sense of community and belonging keeps us excited to walk through the door every day, wherever that door may be, in any of our 13 countries.
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Information Technology and Engineering
- Industries: Retail; Technology, Information and Media; IT Services and IT Consulting
Data Engineer
Posted 3 days ago
Job Description
Thanks for your interest in ScotiaTech, Scotiabank's new and innovative Technology hub in Bogota.
Join a purpose driven winning team that promotes creativity and innovation in a fast-paced environment, where we’re always committed to results, in an inclusive, diverse, and high-performing culture.
Purpose
The Data Engineer will design, develop, and maintain data pipelines and infrastructure on Google Cloud Platform (GCP) to support the creation of scalable data products. Reporting to the Senior Data Engineer, this role focuses on implementing robust and efficient data solutions to enable data-driven decision-making and support business objectives.
Accountabilities
- Champions a customer-focused culture to deepen client relationships and leverage broader Bank relationships, systems, and knowledge.
- Build and maintain data pipelines and ETL/ELT processes on GCP to ensure reliable and efficient data flow for data products.
- Collaborate with the Senior Data Engineer and cross-functional teams (e.g., Data Scientists, Product Managers) to understand requirements and deliver high-quality data solutions.
- Implement data models, schemas, and transformations to support analytics and reporting needs.
- Ensure data quality, integrity, and performance by monitoring and optimizing data pipelines.
- Adhere to data governance, security, and compliance standards within GCP environments.
- Troubleshoot and resolve issues in data pipelines to minimize downtime and ensure operational efficiency.
- Contribute to the adoption of best practices and tools for data engineering, including documentation and testing.
- Stay updated on GCP services and data engineering trends to enhance pipeline capabilities.
- Contribute to the development of data governance and best practices for data handling.
- Understand how the Bank’s risk appetite and risk culture should be incorporated into day-to-day activities and decisions.
- Actively pursues effective and efficient operations of his/her respective areas in accordance with Scotiabank’s Values, its Code of Conduct and the Global Sales Principles, while ensuring the adequacy, adherence to and effectiveness of day-to-day business controls to meet obligations with respect to operational, compliance, AML/ATF/sanctions and conduct risk.
- Champions a high-performance environment and contributes to an inclusive work environment.
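Data quality monitoring, as in the accountabilities above, typically starts with simple rule-based checks run against each batch before it flows downstream. A minimal sketch, with illustrative column names and rules:

```python
def quality_report(rows, required=("id", "amount")):
    """Count rows failing basic completeness and range checks."""
    failures = {"missing_field": 0, "negative_amount": 0}
    for r in rows:
        if any(r.get(f) is None for f in required):
            failures["missing_field"] += 1
        elif r["amount"] < 0:
            failures["negative_amount"] += 1
    return failures

batch = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": -3.0},   # violates the range check
    {"id": None, "amount": 7.5}, # violates the completeness check
]
report = quality_report(batch)
```

In a production pipeline these counts would feed monitoring and alerting rather than a return value, but the check-then-report structure carries over.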
Education / Experience / Other Information
- 3+ years of experience in data engineering, with hands-on expertise in building data pipelines on Cloud.
- Proficiency with GCP tools such as BigQuery, Dataflow, Pub/Sub, or Cloud Composer (nice to have).
- Strong programming skills in Python and Spark, plus advanced SQL for data processing.
- Experience with data modeling, schema design, and data warehousing concepts.
- Familiarity with version control and issue-tracking systems (e.g., Git, Jira) and basic CI/CD practices is a plus.
- Understanding of data governance and security practices in cloud environments.
- Strong problem-solving skills and ability to work collaboratively in a team environment.
- Effective communication skills to translate technical concepts to non-technical stakeholders.
Working Conditions
Work in a standard office-based environment; non-standard hours are a common occurrence.
#LI-Hybrid
#COLGBS
Location(s): Bogotá or Home-Office
ScotiaTech is a business unit within ScotiaGBS, a Scotiabank Group company located in Bogota, Colombia. The ScotiaTech hub was created to support different technology systems and processes of the Bank. We offer an inclusive, positive work environment, and competitive benefits.
At ScotiaTech, we value the unique skills and experiences each individual brings and are committed to creating and maintaining an inclusive and accessible environment for everyone. Candidates must apply directly online to be considered for this role. We thank all applicants for their interest in a career at ScotiaTech; however, only those candidates who are selected for an interview will be contacted.
Note: All postings will remain live for a minimum of 5 days.
Data Engineer
Posted 3 days ago
Job Description
Solvd is an AI-first advisory and digital engineering firm delivering measurable business impact through strategic digital transformation. Taking an AI-first approach, we bridge the critical gap between experimentation and real ROI, weaving artificial intelligence into everything we do and helping clients at all stages accelerate AI integration into each process layer. Our mission is to empower passionate people to thrive in the era of AI while maintaining rigorous ethical AI standards. We’re supported by a global team with offices in the USA, Poland, Ukraine and Georgia.
We are looking for a Data Engineer to develop an AI-powered data mapping recommendation platform to speed up the integration and validation of complex datasets. The system will automate data extraction, mapping, and validation processes that currently require extensive manual effort due to inconsistencies in source data, reliance on domain-specific code mappings, and heuristic-based validation.
Responsibilities
- Build and maintain scalable data pipelines with Databricks, Spark, and PySpark.
- Manage data governance, security, and credentials using Unity Catalog and Secret Scopes.
- Develop and deploy ML models with MLflow; work with LLMs and embedding-based vector search.
- Apply ML/DL techniques (classification, regression, clustering, transformers) and evaluate using industry metrics.
- Design data models and warehouses leveraging dbt, Delta Lake, and Medallion architecture.
- Work with healthcare data standards and medical terminology mapping.
Databricks Expertise
Candidates must demonstrate strong hands-on experience with the Databricks platform, including:
- Unity Catalog: Managing data governance, access control, and auditing across workspaces.
- Secret Scopes: Secure handling of credentials and sensitive configurations.
- Apache Spark / PySpark: Writing performant, scalable distributed data pipelines.
- MLflow: Managing the ML lifecycle, including experiment tracking, model registry, and deployment.
- Vector Search: Working with vector databases or search APIs to build embedding-based retrieval systems.
- LLMs (Large Language Models): Familiarity with using or fine-tuning LLMs in Databricks or similar environments.
Data Engineering Skills
Experience designing and maintaining robust data pipelines:
- Data Modeling & Warehousing: Dimensional modeling, star/snowflake schemas, SCD (Slowly Changing Dimensions).
- Modern Data Stack: Familiarity with dbt, Delta Lake, and the Medallion architecture (Bronze, Silver, Gold layers).
Machine Learning Knowledge (Nice to Have)
A strong foundation in machine learning is expected, including:
- Traditional Machine Learning Techniques: Classification, regression, clustering, etc.
- Model Evaluation & Metrics: Precision, recall, F1-score, ROC-AUC, etc.
- Deep Learning (DL): Understanding of neural networks and relevant frameworks.
- Transformers & Attention Mechanisms: Knowledge of modern NLP architectures and their applications.
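For the evaluation metrics listed above, precision, recall, and F1 can be computed directly from prediction counts. A small self-contained sketch:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute binary-classification metrics from scratch."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 3 true positives, 1 false negative, 1 false positive, 1 true negative
p, r, f1 = precision_recall_f1([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
```

In practice a library such as scikit-learn provides these, but knowing the underlying counts is what the posting is asking for.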
Data Engineer
Posted 3 days ago
Job Description
Pioneering trusted medical solutions to improve the lives we touch: Convatec is a global medical products and technologies company, focused on solutions for the management of chronic conditions, with leading positions in advanced wound care, ostomy care, continence care, and infusion care. With around 10,000 colleagues, we provide our products and services in almost 100 countries, united by a promise to be forever caring. Our solutions provide a range of benefits, from infection prevention and protection of at-risk skin to improved patient outcomes and reduced care costs. Convatec’s revenues in 2023 were over $2 billion. The company is a constituent of the FTSE 100 Index (LSE:CTEC). To learn more about Convatec, please visit
Are you a seasoned Data & BI Engineer looking for a role where you can truly make an impact? Do you thrive on transforming complex data into actionable insights and building robust data platforms? We're on a transformative journey to modernize our global data platform, and we're seeking a proactive and technically skilled professional with 5-7 years of experience to join our team.
This is a unique opportunity to contribute hands-on across the entire data engineering, modeling, and reporting lifecycle. You'll start with foundational work like migrating from legacy systems to Snowflake , and progressively deepen your skills as you help us build advanced architectures in Microsoft Fabric, Data Mesh, and SAP Analytics Cloud . If you're eager to learn, grow, and shape the future of our data landscape, we want to hear from you!
Main Responsibilities:
Support the migration from SQL Server, AAS, and SSIS/DataStage to Snowflake.
Build scalable and reusable data pipelines using dbt, Databricks, Fivetran, and HVR.
Recreate analytical models using Power BI and begin exploring Microsoft Fabric through POCs.
Integrate data from sources such as SAP, CRM, and SharePoint.
Document metadata and support the implementation of Microsoft Purview.
Develop and deploy Fabric-compatible data models and pipelines using OneLake.
Contribute to the decommissioning of Snowflake and the full transition to Microsoft Fabric.
Support the rollout of SAP Analytics Cloud and Group Reporting.
Participate in the implementation of Data Mesh principles, including data domain ownership and stewardship.
Apply CI/CD practices (GitHub) and contribute to automated testing and data quality controls.
Key Requirements:
4–7 years of experience as a Data Engineer, BI Developer, or related roles.
Bachelor’s degree in Computer Science, Information Systems, Software Engineering, Data Science, Mathematics, Statistics, or Engineering (with a strong data/technology focus).
Hands-on experience with at least one cloud data platform (e.g., Azure SQL, Snowflake, Databricks).
Fluency in English is required for effective communication within our global team and with diverse stakeholders.
Proficient in SQL and data modelling (dimensional/star schemas).
Working knowledge of ETL/ELT tools like dbt, SSIS, or DataStage.
Experience creating reports and semantic models in Power BI.
Familiarity with Git-based version control and basic CI/CD workflows.
Eagerness to learn and grow into Microsoft Fabric, Data Mesh, and modern cloud-native data architectures.
Strong collaboration and communication skills; ability to work with business and IT stakeholders.
Detail-oriented with a commitment to data quality and process excellence.
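The dimensional modelling requirement above can be illustrated with a toy star schema: one fact table keyed to surrounding dimension tables. Table and column names here are invented, and sqlite3 stands in for a real warehouse.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A minimal star schema: one fact table referencing two dimensions.
conn.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, iso_date TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    date_key    INTEGER REFERENCES dim_date(date_key),
    amount      REAL
);
INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
INSERT INTO dim_date VALUES (20240101, '2024-01-01');
INSERT INTO fact_sales VALUES (1, 20240101, 10.0), (1, 20240101, 2.5), (2, 20240101, 7.0);
""")
# A typical dimensional query: slice the fact table by a dimension attribute.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
```

The same shape scales up: facts stay narrow and additive, and descriptive attributes live in the dimensions.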
Good to Have:
Exposure to Microsoft Fabric, OneLake, or SAP Analytics Cloud is a plus (or strong motivation to learn).
Experience with metadata management tools like Microsoft Purview is an advantage.
Evidence of training or certification(s) in: Microsoft Certified: Azure Data Engineer Associate / dbt Fundamentals Certification / Snowflake SnowPro Core Certification / Databricks Lakehouse Fundamentals / Power BI Data Analyst Associate (PL-300) / GitHub Actions or CI/CD tooling certifications.
Beware of scams online or from individuals claiming to represent Convatec
A formal recruitment process is required for all our opportunities prior to any offer of employment. This will include an interview confirmed by an official Convatec email address.
If you receive a suspicious approach over social media, text message, email or phone call about recruitment at Convatec, do not disclose any personal information or pay any fees whatsoever. If you’re unsure, please contact us at .
Equal opportunities
Convatec provides equal employment opportunities for all current employees and applicants for employment. This policy means that no one will be discriminated against because of race, religion, creed, color, national origin, nationality, citizenship, ancestry, sex, age, marital status, physical or mental disability, affectional or sexual orientation, gender identity, military or veteran status, genetic predisposing characteristics or any other basis prohibited by law.
Notice to Agency and Search Firm Representatives
Convatec is not accepting unsolicited resumes from agencies and/or search firms for this job posting. Resumes submitted to any Convatec employee by a third party agency and/or search firm without a valid written and signed search agreement, will become the sole property of Convatec. No fee will be paid if a candidate is hired for this position as a result of an unsolicited agency or search firm referral. Thank you.
Already a Convatec employee?
If you are an active employee at Convatec, please do not apply here. Go to the Career Worklet on your Workday home page and View "Convatec Internal Career Site - Find Jobs". Thank you!
Data Engineer
Posted 6 days ago
Job Description
Join the Data Revolution: Where Numbers Fuel Financial Innovation!
Engage with the dynamic convergence of finance and innovation at LevelUp Finance, our esteemed client. Here, they cultivate an atmosphere where professional advancement seamlessly intertwines with personal fulfillment. Envision a workplace where the concept of work-life balance transcends rhetoric, becoming an everyday reality, as sustainability, self-care, and contentment permeate every aspect of the organizational ethos.
At LevelUp Finance, each team member is cherished for their distinctive abilities and contributions. Within this inclusive and encouraging environment, individuals flourish amidst a culture of cooperation and inventive thinking. Picture yourself as a pivotal figure in a team reshaping the very foundations of finance, where your insights and skills mold the future landscape of the industry.
Joining LevelUp Finance offers not only the chance to engage with cutting-edge technologies and industry trailblazers but also to leave a lasting mark on the financial world. It's an opportunity to relish the harmonious blend of professional growth and personal satisfaction that defines the essence of LevelUp Finance.
Job Description
As a Data Engineer , your expertise in finance, FP&A, data analytics, or business intelligence is in high demand at LevelUp Finance. Join our Strategic Finance & Business Intelligence consulting practice, where your technical prowess will drive innovative solutions. Work alongside industry leaders, shaping the future of finance through cutting-edge data engineering.
Quick Job Glance:
Shift: Monday to Friday | 9 AM to 6 PM CO East
Why You'll Love Working Here: Perks and More
- 5 days work week
- 20 vacation days in total
- Fully-customized Emapta laptop and peripherals
- Indefinite term type contract
- Direct exposure to our clients
- Diverse and supportive work environment
- Free upskilling through Emapta Academy courses
What We Seek: Your Expertise and Experience
- Bachelor’s degree in Computer Science, Finance, Engineering, Mathematics, or a related field
- Proven experience as a Data Engineer working with financial datasets or within a finance organization
- Proficiency in Python and SQL for data transformation and automation
- Hands-on experience with modern data warehouses (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with business systems such as NetSuite, Salesforce, or similar
- Experience with data pipeline orchestration tools (e.g., Airflow, dbt)
- Knowledge of financial metrics, GAAP reporting, or subscription-based business models (ARR, CAC, LTV, etc.)
- Strong understanding of data privacy and security in regulated environments
- Experience in a fintech, asset management, or private equity-backed company
- Familiarity with Data Visualization tools such as Power BI, Looker, or Tableau
- Exposure to Snowflake-specific features like roles, shares, and tasks
- Experience with source control (e.g., Git), CI/CD pipelines, and cloud platforms (e.g., AWS, GCP)
Role Rundown: Key Duties and Tasks
- Design, build, and maintain ETL/ELT pipelines to ingest, transform, and consolidate data from systems such as ERP (e.g., NetSuite), CRM (e.g., Salesforce) and internal databases
- Develop data models to support financial reporting, FP&A, regulatory filings, and performance metrics
- Work with structured and semi-structured financial data (e.g., transactional records, P&L statements, KPIs, ARR/MRR data, market data feeds)
- Ensure high data quality and consistency across systems, identifying and resolving data anomalies or discrepancies
- Deliver solutions for external clients to automate recurring reporting processes
- Monitor and optimize the performance and cost-effectiveness of data pipelines and warehouse usage
- Maintain clear documentation of data lineage, transformation logic, and business definitions
- Support ad-hoc data requests from finance leadership, auditors, and regulators
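Since the role asks for familiarity with subscription metrics such as ARR and MRR, here is a hedged sketch of how MRR is commonly derived from active subscriptions; the record layout is invented for illustration, and exact definitions vary by company.

```python
def mrr(subscriptions):
    """Monthly recurring revenue: sum of normalized monthly amounts for active subs."""
    total = 0.0
    for s in subscriptions:
        if not s["active"]:
            continue
        # Normalize annual plans to a monthly amount.
        total += s["price"] / 12 if s["billing"] == "annual" else s["price"]
    return total

subs = [
    {"price": 120.0, "billing": "annual",  "active": True},   # 10.0/month
    {"price": 30.0,  "billing": "monthly", "active": True},
    {"price": 50.0,  "billing": "monthly", "active": False},  # churned, excluded
]
monthly = mrr(subs)    # MRR
annual = monthly * 12  # ARR, under the simple MRR x 12 convention
```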
Your Future Team at Emapta Latam
Join Emapta Latam and contribute to our legacy of transforming global outsourcing. Since 2010, Emapta has pioneered personalized outsourcing solutions, empowering businesses to thrive with bespoke teams and seamless integration. Our commitment to excellence is reflected in our state-of-the-art facilities, competitive compensation, and a supportive work environment that fosters professional growth. With over 1,000 clients worldwide and a team of over 10,000 talented professionals , Emapta continues to set new standards in the industry. Apply now to be part of our success story in Colombia, where your skills are valued, and your career ambitions are supported.
#EmaptaEra
- Seniority level: Associate
- Employment type: Full-time
- Job function: Finance
- Industries: Outsourcing and Offshoring Consulting
Data Engineer
Posted 8 days ago
Job Description
Data Engineer - Build Scalable Data Infrastructure
At Nearshore Business Solutions, we work with companies across industries that rely on data to make critical decisions. We're building a talent pool of experienced data engineers so we can connect you with the right opportunity as soon as it becomes available. If you're focused on building scalable pipelines, ensuring data quality, and working with modern data infrastructure, we’d like to hear from you.
What You’ll Do (in future roles)
- Design and maintain scalable data pipelines for batch and real-time processing
- Develop ETL workflows to transform and consolidate data from multiple sources
- Integrate APIs, databases, and cloud services into unified data systems
- Collaborate with analysts, data scientists, and engineers to ensure reliable data delivery
- Implement data quality checks and monitoring frameworks
- Ensure compliance with data governance and security requirements
- Contribute to improvements in performance, reliability, and scalability
- Support data modeling and architecture decisions
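The pipeline duties listed above can be sketched in miniature. The following is an illustrative example only — the table, records, and checks are hypothetical, not taken from any posting here — showing a tiny batch ETL step in plain Python that extracts raw records, applies a data-quality check, transforms, and loads into SQLite:

```python
import sqlite3

# Hypothetical raw records, standing in for rows pulled from an API or source DB.
raw_orders = [
    {"order_id": 1, "amount": "19.99", "country": "CO"},
    {"order_id": 2, "amount": "5.00", "country": "co"},
    {"order_id": None, "amount": "7.50", "country": "CO"},  # fails the quality check
]

def quality_check(row):
    """Reject records with a missing primary key."""
    return row["order_id"] is not None

def transform(row):
    """Normalize types and casing for a single record."""
    return (row["order_id"], float(row["amount"]), row["country"].upper())

def run_pipeline(records, conn):
    """Filter, transform, and load a batch; return the rejected-row count for monitoring."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, country TEXT)"
    )
    clean = [transform(r) for r in records if quality_check(r)]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
    conn.commit()
    return len(records) - len(clean)

conn = sqlite3.connect(":memory:")
rejected = run_pipeline(raw_orders, conn)
loaded = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(rejected, loaded)  # one record rejected, two loaded
```

In a real role this shape would typically be expressed in PySpark or an orchestration framework such as Airflow rather than hand-rolled, but the extract/validate/transform/load structure is the same.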
What You’ll Bring
- 3+ years of experience in data engineering
- Proficiency in Python, Scala, or Java for data pipeline development
- Experience with ETL tools and orchestration frameworks (e.g., Airflow, dbt)
- Strong SQL skills and familiarity with both relational and NoSQL databases
- Experience with cloud data platforms (e.g., AWS, GCP, Azure)
- Familiarity with distributed processing frameworks (e.g., Spark)
- Solid understanding of data security and compliance requirements
- Strong attention to detail and problem-solving skills
Preferred Experience
- Familiarity with Kafka or similar streaming tools
- Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery
- Knowledge of Docker, Kubernetes, or related container technologies
- Exposure to data lakehouse architectures or modern data platforms
Why Join Our Talent Pool
- Get early access to new remote and hybrid roles across Latin America and the U.S.
- Be matched to positions based on your technical skills and career goals
- Work with a recruiting team that understands modern data engineering requirements
Apply now to stay informed about upcoming roles that match your experience and expertise.
Data Engineer
Posted 10 days ago
Job Description
Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers—amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility—our people are energized problem solvers who take pride in how the work we do changes the world for the better.
We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that’s you, we would love to have you join us!
Job Description
Use predictive modeling, statistics, trend analysis, and other data analysis techniques to identify the right data to be analyzed from internal and external sources, then construct software systems and algorithms to explain or predict customer behavior and solve a variety of business problems. Assist business analysts with finding patterns and relationships in data. Build predictive models using large-scale data, test the model on results outside of the sample size, and verify the model in the real world through relational database structures, research methods, sampling techniques, and system testing.
You will report to the Director of Commercial Innovation and Design.
Responsibilities:
- Design, code, and test new data management solutions, including supporting applications and interfaces.
- Support development activities in multiple DA&I and Connected Enterprise related projects for internal and external customers.
- Collaborate with other engineers and architects to plan and maintain the architecture.
- Mentor and lead DA&I engineers through the development lifecycle.
- Improve the DevOps pipeline deployment model.
- Write high-quality, regulation-compliant code.
- Collaborate with business systems analysts and product owners to define requirements.
- Support backlog management in JIRA to maintain stories.
- Lead a small team of analysts and engineers for new feature development.
- Create test and data validation logic, frameworks, and code.
- Troubleshoot reported issues and bugs, create fixes, conduct Root Cause Analysis, and propose permanent solutions.
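The "test and data validation logic" responsibility above can be illustrated with a deliberately small sketch. All names here (RULES, validate, the sample batch) are hypothetical and only demonstrate the rule-based validation pattern, not any Rockwell-specific framework:

```python
# A miniature validation framework: each rule is a name plus a predicate over
# one record; validate() reports which rules failed for which record index.
RULES = [
    ("amount_non_negative", lambda r: r["amount"] >= 0),
    ("country_is_iso2", lambda r: isinstance(r["country"], str) and len(r["country"]) == 2),
]

def validate(records, rules=RULES):
    """Run every rule against every record; return (record_index, rule_name) failures."""
    failures = []
    for i, record in enumerate(records):
        for name, predicate in rules:
            if not predicate(record):
                failures.append((i, name))
    return failures

batch = [
    {"amount": 10.0, "country": "CO"},
    {"amount": -3.0, "country": "COL"},  # violates both rules
]
print(validate(batch))  # [(1, 'amount_non_negative'), (1, 'country_is_iso2')]
```

Keeping rules as data rather than inline `if` statements makes the checks easy to unit-test individually and to extend as new requirements are gathered.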
Required experience:
- Data engineering with Databricks (Python, PySpark, Spark SQL): 5+ years
- CI/CD with Azure DevOps pipeline: 2+ years
- Azure Data Factory for job orchestration: 3+ years
- Requirement & Business use case gathering: 3-5 years
- Metadata & Data Governance: 5+ years
- High-level Design, Data Modeling: 3+ years
Required skills: Python, SQL, PySpark, Data Warehousing, and the Azure stack (Databricks, Data Factory, DevOps). Power BI is a plus.
Hours: Provide daily support from 7:00am to 4:30pm CST or 12:00pm to 9:00pm IST.
What We Offer:
- Comprehensive mindfulness programs with a premium membership to Calm.
- Volunteer Paid Time off available after 6 months of employment for eligible employees.
- Company volunteer and donation matching program — your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation.
- Employee Assistance Program.
- Personalized wellbeing programs through our OnTrack program.
- On-demand digital course library for professional development, among other benefits.
#LI-PT2
#LI-Hybrid
Rockwell Automation’s hybrid policy expects employees to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.
Data Engineer
Posted 12 days ago
Job Description
Rockwell Automation is a global technology leader focused on helping the world’s manufacturers be more productive, sustainable, and agile. With more than 28,000 employees, we are committed to making a positive impact on the world through our work with customers in various vital industries.
We welcome makers, forward thinkers, and problem solvers seeking a place to do their best work. If that’s you, we would love to have you join us!
Position Overview
Use predictive modeling, statistics, trend analysis, and other data analysis techniques to identify relevant data from internal and external sources. Develop software systems and algorithms to explain or predict customer behavior and solve business problems. Assist analysts in finding patterns in data, build predictive models, and validate these models through real-world testing.
You will report to the Director of Commercial Innovation and Design.
Responsibilities
- Design, code, and test new data management solutions, including supporting applications and interfaces.
- Support cross-functional development activities across multiple projects related to DA&I and Connected Enterprise, for both internal and external customers.
- Collaborate with engineers and architects to plan and update system architecture.
- Mentor and lead DA&I engineers through the development lifecycle.
- Manage and improve the DevOps deployment pipeline.
- Write high-quality, compliant code.
- Work with business analysts and product owners to define requirements.
- Support backlog management in JIRA.
- Lead a team of analysts and engineers for new feature development.
- Create testing and data validation frameworks and code.
- Troubleshoot issues, perform root cause analysis, and implement permanent fixes.
Required Experience
- Data engineering with Databricks (Python, PySpark, Spark SQL): 5+ years
- CI/CD with Azure DevOps pipeline: 2+ years
- Azure Data Factory for orchestration: 3+ years
- Requirement gathering and business use case analysis: 3-5 years
- Metadata and Data Governance: 5+ years
- High-level Design and Data Modeling: 3+ years
Proficiency in Python, SQL, PySpark, Data Warehousing, and Azure Stack (Databricks, Data Factory, DevOps). Power BI skills are a plus.
Work Hours
Support available from 7:00am to 4:30pm CST or 12:00pm to 9:00pm IST.
What We Offer
Our benefits include:
- Mindfulness programs with Calm membership
- Volunteer Paid Time Off after 6 months
- Charitable donation matching
- Employee Assistance Program
- Wellbeing programs via OnTrack
- Access to a digital professional development library
We value diversity and inclusion, providing growth opportunities for all talents and celebrating individuality. Join us and help change the world.
#LI-EV1
#LI-Hybrid
Our hybrid policy expects employees to work at a Rockwell location at least on Mondays, Tuesdays, and Thursdays, unless otherwise required.