Work Location: Zizhu Science Park, Minhang District, Shanghai (free company shuttle / Metro Line 5)
Do you love the challenges of working with systems that host petabytes of data and many tens of thousands of cores? Of building platforms that analyze these data using next-generation analytics, machine learning, and deep learning technologies (e.g., Spark, BigDL, TensorFlow, Keras, PyTorch, Flink, Docker, etc.)? Working on these problems requires solving technical challenges across computer architecture, operating systems, file systems, data storage, databases, networking, distributed computing, data analytics, machine learning, and deep learning.
We are looking for great interns with a passion for complex software systems, distributed systems, and/or machine learning. You will design and develop models, pipelines, and solutions using big data analytics, machine learning, and deep learning, as well as build optimized layers, architectures, models, and learning pipelines for customers' use cases.
- Bachelor, Master, or PhD degree in Computer Science or similar technical discipline (or equivalent)
- A solid foundation in computer science, with strong competencies in computer system internals, data structures, algorithms, and software design
- Experience with machine learning and deep learning technologies is a plus
- Experience with large-scale, distributed data processing frameworks (e.g., Spark, Kafka, YARN, Tachyon, Mesos, etc.) is a plus
- Fluency in English (reading and writing)