
Data Engineer

Information Technology
Corporate
COMPANY OVERVIEW
For over a century, Neiman Marcus Group has served the unique needs of our discerning customers by staying true to the principles of our founders: to be the premier omni-channel retailer of luxury and fashion merchandise, dedicated to providing superior service and a distinctive shopping experience in our stores and on our websites. Neiman Marcus Group comprises the Specialty Retail Stores division, which includes Neiman Marcus and Bergdorf Goodman, and our international brand, mytheresa.com. Our portfolio of brands offers the finest luxury and fashion apparel, accessories, jewelry, beauty, and home décor. The Company operates more than 40 Neiman Marcus full-line stores in the most affluent markets across the United States, including U.S. gateway cities that draw an international clientele. In addition, we operate two Bergdorf Goodman stores in landmark locations on Fifth Avenue in New York City. We also operate more than 40 Last Call by Neiman Marcus off-price stores that cater to a value-oriented yet fashion-minded customer. Our upscale eCommerce and direct-to-consumer division includes NeimanMarcus.com, BergdorfGoodman.com, Horchow.com, LastCall.com, and CUSP.com. Every day, each of our 15,000 NMG associates works toward the goal of enabling our customers to shop any of our brands "anytime, anywhere, and on any device." Whether through the merchandise we sell, the customer service we offer, or our investments in technology, everything we do is meant to enhance the customer experience across all channels and brands.
 
DESCRIPTION


The Data Engineer will bring a unique combination of the business acumen needed to interface directly with key stakeholders and understand the problem, and the skills and vision to translate that need into a world-class technical solution using the latest technologies.

 

This is a hands-on role responsible for building data engineering solutions for NMG Enterprise using a cloud-based data platform. The Data Engineer will deliver day-to-day technical work and participate in technical design, development, and support for data engineering workloads. In this role, you need to be equally skilled with the whiteboard and the keyboard.


JOB DUTIES

  • Design, develop, implement, test, document, and operate large-scale, high-volume, high-performance data structures for analytics.
  • Develop robust, automated pipelines to ingest and process structured and unstructured data from source systems into analytical platforms, using both batch and streaming mechanisms and cloud-native tooling.
  • Implement automation to optimize data platform compute and storage resources.
  • Implement real-time and batch data ingestion routines using best practices in data modeling and ETL/ELT processes, leveraging AWS technologies and big data tools.
  • Develop and enhance end-to-end monitoring capabilities for cloud data platforms.
  • Gather business and functional requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture.
  • Analyze source data systems and drive best practices in source teams.
  • Participate in the full development life cycle, end-to-end: design, implementation, testing, documentation, delivery, support, and maintenance.
  • Produce comprehensive, usable data set documentation and metadata.
  • Continuously integrate and ship code into our cloud production environments.
  • Evaluate and make decisions on data set implementations designed and proposed by peer data engineers, and on the use of new or existing software products and tools.
  • Mentor junior data engineers.

JOB REQUIREMENTS
  • BS in Computer Science or a related field
  • 4+ years of experience in the data and analytics space
  • Certification preferred – AWS Certified Big Data or a comparable certification on another cloud or big data platform
  • 2+ years of experience developing and implementing enterprise-level data solutions using Python, Java, Scala, Spark, Airflow, and Hive
  • 2+ years of experience with key aspects of software engineering such as parallel data processing, data flows, REST APIs, JSON, and microservice architectures
  • 2+ years of experience with big data processing frameworks and tools – MapReduce, YARN, Hive, Pig, Oozie, and Sqoop – and good knowledge of common big data file formats (e.g., Parquet, ORC)
  • 4+ years of experience with RDBMS concepts, with strong data analysis and SQL skills
  • 3+ years of proficiency with Linux command-line tools and Bash scripting
  • Solid programming experience in Python (expert level, 4 out of 5)
 
 

Knowledge, Skills and Abilities: 

  • A passion for technology and data analytics, with a strong desire to keep learning and honing skills
  • Ability to work in a team environment
  • Flexibility to work in a matrix reporting structure
  • Strong understanding of Hadoop fundamentals, with experience on big data processing frameworks and tools – MapReduce, YARN, Hive, Pig, Oozie, and Sqoop – and good knowledge of common big data file formats (e.g., Parquet, ORC)
  • Ability to develop large-scale, event-based streaming architectures
  • Strong communication and documentation skills
  • Willingness to mentor other team members and participate in cross-training
  • Working knowledge of NoSQL and in-memory databases
  • Working knowledge of developing data ingestion and transformation capabilities using Hive, Python, Spark, Scala, and Airflow
  • Background in all aspects of software engineering, with strong skills in parallel data processing, data flows, REST APIs, JSON, XML, and microservice architectures
  • Experience working in a Scrum/Agile environment and with associated tools (Jira)
  • Experience with large data sets and the associated job performance tuning and troubleshooting
  • Ability to collaborate with cross-functional IT teams and global delivery teams
 
Nice to have:
  • Kubernetes and Docker experience a plus
  • Prior working experience with a data science workbench
  • Cloud data warehouse experience - Snowflake is a plus
  • Data Modeling experience a plus
  • Knowledge of data engineering aspects within machine learning pipelines (e.g., train/test splitting, scoring process, etc.)


Los Angeles and San Francisco Applicants: Neiman Marcus will consider for employment qualified applicants with criminal history as required by applicable law.
If you have a disability under the Americans with Disabilities Act or similar law, and you need assistance in accessing our Career Center or wish to discuss potential accommodations related to applying for employment at our Company, please contact ApplicantSupport@NeimanMarcus.com.