App Hive jobs
Convert details from a website/PDF files to create a DIY plan for building a beehive. I want 3D images made of the timber, and would like the images to be made to look like the attachment. The plans can be found here: with links to some PDFs. The plan will be 8-12 pages long when finished. I want everything 100% done with these details I provided.
...and 2 5) The list of years and the number of movies released each year 6) The number of movies that have a runtime of two hours. Steps to follow: 1. Create a table in an RDBMS (MySQL, MSSQL, Oracle) and load the data into the table (using bulk insert). 2. Ingest the data using Sqoop to an HDFS location. 3. Create a Hive external table. 4. Read the external table using a PySpark session. 5. Perform the Spark POC query and save the file in Parquet format. 6. After saving the file, again create an external table in Hive and load the Parquet data. 7. Optional: create a BI report using Tableau, Power BI or Kibana. Note: I'm sharing the bulk insert query for your reference (MSSQL): create table customers ( Customer_id int, Cust_name varchar(100), City varchar(20), Grade nvarchar(10), Salesman_i...
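Step 1 of the list above (create a table and bulk-load it) can be sketched with a stand-in database. This is a minimal illustration using Python's stdlib sqlite3 in place of MySQL/MSSQL/Oracle; the column names follow the posting's snippet (the truncated `Salesman_i...` column is left out), and the sample rows are made up.

```python
import sqlite3

# Minimal sketch of "create a table and bulk-load it", with sqlite3
# standing in for MySQL/MSSQL/Oracle. Column names follow the posting's
# snippet; the truncated Salesman_i... column is omitted.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customers (
        Customer_id INTEGER,
        Cust_name   TEXT,
        City        TEXT,
        Grade       TEXT
    )""")
# executemany is sqlite's closest analogue to a bulk insert
rows = [
    (3002, "Nick Rimando", "New York", "100"),
    (3007, "Brad Davis", "New York", "200"),
]
conn.executemany("INSERT INTO customers VALUES (?, ?, ?, ?)", rows)
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]
print(count)  # 2
```

In the real pipeline, the loaded table would then be pulled to HDFS with a `sqoop import` and exposed as a Hive external table before the PySpark steps.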
Need big data and Hadoop tools, some of them like Spark SQL, Hadoop, Hive and Databricks, data lakes.
Looking for someone who can help with a very small project using Hive and advanced Hive tools (big data management tools).
As part of this project, the role would be developer, and you must know Sqoop, Hive, HDFS, PySpark and Pig. Regular story development uses the skills above.
Hi Nassereddine A., you were in a contest of mine last month. Your design was very similar to the one that I awarded, but when I awarded a winner they never turned the file over to me. I have a photo of it. Could I pay you to recreate the logo and turn it into a usable file for me? I will send you a photo of it if so.
Hey, I am trying to get the file from you. I selected your logo and chatted with you about a change to the word "basketball". When can I get my file from you?
Someone who had experience with Spark, Hadoop, Hive, Kafka Processing with Azure
Looking for a Python and Scala expert. The candidate should have knowledge of big data domains such as Hadoop, Spark, Hive, etc. Knowledge of Azure Cloud is a plus. Share your CV.
Reading XML and XPT files using Spark Scala and writing the output in the form of a Hive table. Currently facing small issues while running the project; I believe the issue might be something small that I am not able to see.
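The XML-parsing half of a job like this can be sketched without Spark at all. A minimal stdlib illustration (the `<record>`/`<id>`/`<name>` element names are made up for the example) flattens XML into rows from which a DataFrame, and ultimately a Hive table, could be built:

```python
import xml.etree.ElementTree as ET

# Sketch of the parsing step only: flatten XML records into plain rows.
# The <record>/<id>/<name> element names are hypothetical; a real job
# would use the actual schema (or spark-xml) and write to Hive.
def xml_to_rows(xml_text):
    root = ET.fromstring(xml_text)
    return [(rec.findtext("id"), rec.findtext("name"))
            for rec in root.iter("record")]

rows = xml_to_rows(
    "<data><record><id>1</id><name>a</name></record>"
    "<record><id>2</id><name>b</name></record></data>")
print(rows)  # [('1', 'a'), ('2', 'b')]
```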
Need help!!! Building an application with Spark Scala. I need to write a lot of SQL and HiveQL queries to extract data from different sources like MySQL into Hive tables. I also need to extract data to the Azure platform. I write the SQL queries in a tool named Qubole. This job is full time! Please knock, serious people only, who can support me full-time on this project. For further discussion, share your contact as well!
PLEASE READ. Our company, ‘The Hive Collaborative’, has our original logo (which we love), but we recently remodeled our stage and created a cool alternate logo design on the wall. We're looking for someone to take our design and make it into a workable logo, basically keeping it as the picture in logo form, but it can be stylized. The image does not need our name per se. We like this alternate design because it utilizes our building's colors, which are a huge part of our brand. I'll post pictures of our original logo, colors and the new wall design below. If you have any questions let me know! Thanks!
I'd like someone to style the current Property Hive search form to match my design. The font on the placeholders should be Montserrat (300 weight, 18px font size). The font on the search button should be Montserrat bold, 18px, uppercase.
I need a creative designing freelancer to work on this project for my company
We have a video for VideoHive; it needs to be edited and needs a voiceover on it.
Programming language: Python. Hands-on experience with Spark and with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix. Cloud experience on Cloudera or AWS; Oozie workflows; experience creating CI/CD pipelines; unit/JUnit testing and integration or end-to-end testing; Kafka. Tools you need to be familiar with: Bitbucket, Tectia (edge node), SQL Developer, Oozie, Git, Jenkins.
Programming languages: Scala a must, Python as well. Hands-on experience with Spark and with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix. Cloud experience on Cloudera or AWS; Oozie workflows; experience creating CI/CD pipelines; unit/JUnit testing and integration or end-to-end testing; Kafka. Tools you need to be familiar with: Bitbucket, Tectia (edge node), SQL Developer, Oozie, Git, Jenkins, SonarQube.
I'm looking for someone whose expertise is in PySpark data stratification. I have pseudocode available, and from the data set I'm looking to remove duplicates post-strata. Here is a sample set of data: I have created a bin field based on agg_readings, and the data is huge, with close to 320 million records stored in Hive in Parquet format. Of the 320 million, I'm looking to get 5 million based on stratification. Below is the sample snippet; I have used sampleBy here to fetch the stratified sample based on two columns (the columns are mnth_src_fld and bin). All I'm looking for from the stratified data is to get unique gen_rnd_id values across the entire data set post-stratification, but unfortunately I'm not getting unique gen_rnd_id's. For instance, h...
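The shape of the problem can be illustrated in pure Python: per-stratum sampling (a mimic of `DataFrame.sampleBy(fractions)`) gives no uniqueness guarantee on any id column, so an explicit dedup pass, the equivalent of `dropDuplicates(["gen_rnd_id"])`, is needed afterwards. The column names come from the posting; the data and fractions are made up.

```python
import random

# Pure-Python mimic of sampleBy(fractions) followed by an explicit
# dedup on gen_rnd_id. Column names (mnth_src_fld, bin, gen_rnd_id)
# are from the posting; the rows and 10% fraction are invented.
def stratified_unique(rows, fractions, seed=42):
    rng = random.Random(seed)
    # sample each row with its stratum's fraction (stratum = two columns)
    sampled = [r for r in rows
               if rng.random() < fractions.get((r["mnth_src_fld"], r["bin"]), 0.0)]
    # sampling alone does not deduplicate; do it explicitly
    seen, unique = set(), []
    for r in sampled:
        if r["gen_rnd_id"] not in seen:
            seen.add(r["gen_rnd_id"])
            unique.append(r)
    return unique

rows = [{"mnth_src_fld": i % 3, "bin": i % 5, "gen_rnd_id": i % 400}
        for i in range(1000)]
fractions = {(m, b): 0.1 for m in range(3) for b in range(5)}
out = stratified_unique(rows, fractions)
ids = [r["gen_rnd_id"] for r in out]
assert len(ids) == len(set(ids))  # unique post-stratification
```

In PySpark itself, `sampleBy` keys the fractions on a single column, so a two-column stratum is typically expressed as a concatenated or struct key; the point stands either way that a `dropDuplicates` on gen_rnd_id must follow the sampling step.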
I'm looking for an experienced person who can work on Python (advanced level), cloud infrastructure as code (Terraform on AWS), CodeBuild, Kubernetes and Docker, PySpark, SQL, AWS (EMR, S3, Glue, Hive, EC2), and Airflow. I'm looking for a person who can work 4 hours a day in the EST time zone, long term, for up to 1 year, Monday to Friday. Pay will be 45k to 60k per month.
Using the Onshape program and screenshotting at each step, design a small beehive with a drip tray for honey on the bottom. Similar to the attached image, but add a tray to the bottom.
We are a new business called Hive. Our colors are black and gold. We are inspired by the beehive hexagon, and feminine energy. Looking for a cool, clean, effective logo. Hexagons only please. Would love to see a simple and cool crown inside of a hexagon above the word Hive. Would also love a small crown as the dot above the i in the word Hive. I love the hexagon shape. No circles, No squares, No bees.
I am looking for someone who is good at SQL, Python, AWS and Spark.
Experience with Big Data and cloud platform services: Apache Hadoop, Apache Hive. Hive Tez migration experience and optimization of Hive code. Basic knowledge of ETL and data pipelines using Python, shell scripts, Ctrl-M, Apache Airflow; building and populating data warehouses, and querying with BI tools. Basic knowledge of RDBMS fundamentals: design and creation of databases, schemas and tables; DB administration, security, and working with MySQL and IBM Db2. Basic knowledge of the SQL query language, database functions, stored procs, working with multiple tables, joins, and transactions. Should be able to work at least 6 hrs.
Require a Big Data Engineer with a minimum of 4 years of experience in Big Data and ML. Hands-on experience is a must in Big Data architecture, ETL and ML. Must have at least one complete project cycle experience, i.e. from initiation to delivery. Must have experience working with a minimum 100 TB data lake. Exposure to NoSQL technologies and Hadoop-based analytics is preferred. Hands-on experience is required in Hadoop, Pig, Hive, Storm, Kafka, Spark, and other technologies. This is a permanent work-from-home opportunity with regular Indian business timings. Must be very fluent in English speaking and comprehension, as the clients are American. Only immediate joiners with a max 15-day notice period need apply. Applicants must provide evidence of previous work done in GitHub, BitBucket along with the applicatio...
Anyone having knowledge of the following technologies can bid: programming languages Scala (a must) and Python as well; hands-on experience with Spark; hands-on experience with the Hadoop ecosystem: Hive, Sqoop, SQL queries, Unix; cloud experience on Cloudera or AWS; Oozie workflows; experience creating CI/CD pipelines; unit/JUnit testing and integration or end-to-end testing; Kafka.
Require a Big Data Engineer with a minimum of 4 years of experience in Big Data and ML. Hands-on experience is a must in Big Data architecture, ETL and ML. Must have at least one complete project cycle experience, i.e. from initiation to delivery. Must have experience working with a minimum 100 TB data lake. Exposure to NoSQL technologies and Hadoop-based analytics is preferred. Hands-on experience is required in Hadoop, Pig, Hive, Storm, Kafka, Spark, and other technologies. This is permanent work-from-home with regular Indian business timings. Must be very fluent in English speaking and comprehension, as the clients are American. Only immediate joiners with a max 15-day notice period need apply. Applications without a detailed resume along with evidence of previous work in GitHub, BitBucket etc. will be summa...
Logo designed for T&J Farms. We have a pumpkin patch and sell honey so it must incorporate pumpkins and bees/bee hive. I have attached an example of something I threw together to give some ideas of what I'm looking for.
Hi Swapnil, could you please quote your rate for 393 images clear cut onto a transparent (with clear white) background, same as before. Thanks - Mark
...Solution: Started in 2015 Studymedi is a 1st of its kind Medical Education Platform that provides details about various Indian and abroad Universities and also helps in end-to-end admission guidance, VISA, Accommodation etc. with: Head office in the USA (Seattle): 500 Yale Ave N, Seattle, WA 98109, United States Canada (Toronto) - 100 King Street West, Toronto, ON M5X 1C9 India (Chennai) - The Hive L3 VR Chennai, Anna Nagar West, Anna Nagar, Chennai, Tamil Nadu 600040, India Kindly let me know if you would be willing to take this up. I need website SEO-based content + design + development. I need chatbot integration with WhatsApp, Instagram and messenger API integration as well. The purpose of this site is to educate students and parents and to generate the maximum amount...
I am looking for someone who has experience with Hadoop, Google Cloud & BigQuery, HDFS, Sqoop, Hive, Python, and Linux. It will be a long-term project with at least 4-5 hours of support required per day. Please only reach out to me if you have good experience with the languages and systems listed above.
We are looking for Hadoop developers with Hive, Scala and Spark.
We are hiring Hadoop developers good at Hive, Spark and Scala.
We offer many digital services that will help boost your business at affordable prices: 1. Website Development 2. Website Management 3. Search Engine Optimization 4. Social Media Management 5. Branding 6. Portals 7. Software & Apps. About Us: Hive Digital Solutions is a digital marketing agency based in the UAE, made by Emirati talent, that focuses on empowering brands and making them unique and different in the market. With our data-rich, performance-driven approach to digital marketing, we work with brands of all types, from fast-growth startups to international organizations. Kindly let us know what services you want us to assist you with and we will guarantee you great results.
...for the below opening with our esteemed client. Hiring for a Spark developer role with a US IT base. Required experience: 4+ years, at least two years of Spark experience. Salary: 40 lacs. Required qualification: B.E. Location: Hyderabad | Remote till COVID ends. Timings: 6:30 PM-2:00 AM IST. Job description: 1) 4-5 years of hands-on experience with components of the Hadoop ecosystem like HDFS, Hive, Spark, Sqoop, MapReduce and YARN. 2) Hands-on experience with data integration projects on AWS and database platforms (Redshift, Athena, Aurora). 3) Experience with AWS services like S3, IAM, EMR, EC2, AWS Glue. 4) Experience with Azure services. 5) Experience with Kafka and streaming processes and real-time jobs. 6) Experience working with clients and requirements gathering in an agile way. 7) Experie...
My name is Margaret. I am looking for someone who can create a well-designed mobile native app. I would like you to create a social health network app where all supportive groups of people can connect, relate, cope and interact in full confidentiality to feel less lonely and improve their health as the mission. What do you think about it? Why? To be the best fit for this project you need: • Ability to communicate clearly • Lightning-fast turnaround times • Combination of a creative and tech expert (high-performer that can create a delightful UX/UI) • Attention to details • Willingness to sign an NDA What you need to have: • Unrivaled back-end, front-end, security, and database expertise • Broad knowledge of architecture, web technologie...
We have opportunities for qualified Data Engineering Specialists to work for a leading telecommunications company in Sydney, Australia. We're seeking an experienced data engineering specialist who has skills and experience in implementing large-scale big data platforms. Our big data / data analytics technologies: PostgreSQL, Apache Spark, Apache Kafka, Apache HDFS, Apache NiFi, Apache Hive, Apache Flink, Apache Druid, the Scala and Python programming languages, Kubernetes (K8s)/Rancher/Docker, ELK, Internet of Things, Data Science, AI and ML platforms.
...challenges and concerns into a solution-oriented discussion. Must have: ● At least 6+ years of total IT experience ● At least 4+ years of experience in design and development using the Hadoop technology stack and programming languages ● Hands-on experience in 2 or more areas: o Hadoop, HDFS, MR o Spark Streaming, Spark SQL, Spark ML o Kafka/Flume o Apache NiFi o Worked with Hortonworks Data Platform o Hive / Pig / Sqoop o NoSQL databases: HBase/Cassandra/Neo4j/MongoDB o Visualisation & reporting frameworks like D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho o Scrapy for crawling websites o Good to have knowledge of Elasticsearch o Good to have understanding of Google Analytics data streaming o Data security (Kerberos/OpenLDAP/Knox/Ranger) ● Should have a very good overview...
I have all the code; just execute the script and share the results with me in an Apache PySpark / Hive environment.
...Modelling Techniques & Data Engineering Concepts is a must Hands on experience in ETL process, Performance optimization techniques is a must. Candidate should have taken part in Architecture design and discussion. Minimum of 2 years of experience in working with batch processing/ real-time systems using various technologies like Databricks, HDFS, Redshift, Hadoop, Elastic MapReduce on AWS, Apache Spark, Hive/Impala and HDFS, Pig, Kafka, Kinesis, Elasticsearch and NoSQL databases Minimum of 2 years of experience working in Datawarehouse or Data Lake Projects in a role beyond just Data consumption. Minimum of 2 years of extensive working knowledge in AWS building scalable solutions. Equivalent level of experience in Azure or Google Cloud is also acceptable Minimum of 2 years of...
1. You are going to write a Hive query to create a table called used_cars from the data. Use a schema that is appropriate for the column headings. The date column in the dataset is not in a format that is recognized by Apache Hive by default (you cannot use the DATE type to load this dataset); look up methods for loading the dates from this dataset correctly into Hive. 2. Write Hive queries to see how many missing values you have in each attribute. Create a bar chart displaying the number of missing records by column name. 3. Group the price column and count the number of unique prices. Create a bar chart with the price and the number of times it occurs for the top 10 unique prices. Do you notice whether there is a single price that repeats across the ads? Take note of thi...
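For the date-loading issue in the first task, a common workaround is to load the column as STRING and convert it to Hive's expected yyyy-MM-dd form, either inside Hive or by normalizing the file beforehand. A small Python sketch of the pre-load normalization, where the dd/MM/yyyy input format is an assumption about this particular dataset:

```python
from datetime import datetime

# Hive's DATE type expects yyyy-MM-dd, so dates in another layout must
# be normalized first. The dd/MM/yyyy input format is an assumption
# about the dataset, not something stated in the assignment.
def to_hive_date(raw, in_fmt="%d/%m/%Y"):
    return datetime.strptime(raw, in_fmt).strftime("%Y-%m-%d")

print(to_hive_date("31/12/2019"))  # 2019-12-31
```

The in-Hive alternative is to declare the column as STRING and convert at query time, e.g. with `from_unixtime(unix_timestamp(col, 'dd/MM/yyyy'))`.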
I need a logo for a brand-new parking facility that we are starting close to an airport in Spain. Basically, it is a parking lot where you go, park your car, and a shuttle service brings you to the airport. Please bear in mind that 'Colmena' means hive (you can search for it on Google and see the pictures), so I would expect something related to that.
Decision System, SQL Workbench, Hive, SQL, Amazon EMR.
Data Engineers, 6+ yrs: At least 6+ years of total IT experience ● At least 4+ years of experience in design and development using the Hadoop technology stack and programming languages ● Hands-on experience in 2 or more areas: o Hadoop, HDFS, MR o Spark Streaming, Spark SQL, Spark ML o Kafka/Flume o Apache NiFi o Worked with Hortonworks Data Platform o Hive / Pig / Sqoop o NoSQL databases: HBase/Cassandra/Neo4j/MongoDB o Visualisation & reporting frameworks like D3.js, Zeppelin, Grafana, Kibana, Tableau, Pentaho o Scrapy for crawling websites o Good to have knowledge of Elasticsearch o Good to have understanding of Google Analytics data streaming o Data security (Kerberos/OpenLDAP/Knox/Ranger) ● Should have a very good overview of the current landscape and ability t...
1. Build a wrapper. 2. The wrapper should have a Python file. 3. That Python file should ask us for arguments giving the input file names, then pick the file up from its location and load the CSV file into a Hive table; after loading the CSV file into Hive, read from the Hive table and write into another table. Convert the CSV to Parquet and compare the storage sizes of the tables. 4. Do the same for the Avro format as well.
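A minimal skeleton of the wrapper, with two honest substitutions: gzip stands in for Parquet/Avro (real conversions would need pyarrow/fastavro), and the Hive load/read itself is left out. It only shows the argument handling, the CSV read, and the size comparison.

```python
import argparse
import csv
import gzip
import os
import shutil
import tempfile

def compare_sizes(csv_path):
    """Write a compressed copy and return (raw_bytes, compressed_bytes).

    gzip is a stand-in for Parquet/Avro here; a real conversion would
    use pyarrow / fastavro and compare the resulting table directories.
    """
    gz_path = csv_path + ".gz"
    with open(csv_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return os.path.getsize(csv_path), os.path.getsize(gz_path)

def main(argv=None):
    parser = argparse.ArgumentParser(description="CSV size-comparison wrapper")
    parser.add_argument("csv_file", help="input CSV file name")
    args = parser.parse_args(argv)
    with open(args.csv_file, newline="") as f:
        rows = list(csv.reader(f))  # stand-in for "load csv to hive table"
    raw, packed = compare_sizes(args.csv_file)
    print(f"{len(rows)} rows; csv={raw} bytes, compressed={packed} bytes")

# demo on a throwaway file
demo = os.path.join(tempfile.mkdtemp(), "demo.csv")
with open(demo, "w") as f:
    f.write("a,b\n" + "1,2\n" * 100)
main([demo])
```

Run it as `python wrapper.py data.csv`; the Hive steps would slot in where the CSV is read, via `spark.sql` or a Hive `LOAD DATA` issued from the script.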
The project duration is 3-4 days. It involves creating a limited number of interview questions for the subject areas. We need freelancers in all the following areas: 1. AWS, AZURE & GCP 2. SPARK, Hive, Pig 3. PYTHON & R Programming 4. SCALA 5. Shell Script/Bash, Kafka, Storm, RabbitMQ 6. AI & ML 7. DevOps Engineer 8. Cyber Security 9. SQL 10. Data Scientist
I need to pull data from Hive and apply some logic and insert it back into Hive by creating a new table.
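The shape of that job, sketched with stdlib sqlite3 standing in for Hive: read the source table, apply the logic in SQL, and materialize the result as a new table. In Hive itself this is a `CREATE TABLE ... AS SELECT`, or an `INSERT OVERWRITE` after transforming in Spark. The table names and the "logic" below are made up.

```python
import sqlite3

# Read-transform-write-back pattern, with sqlite3 standing in for Hive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, val INTEGER)")
conn.executemany("INSERT INTO src VALUES (?, ?)", [(1, 10), (2, 20), (3, 30)])

# "apply some logic": keep rows with val > 10 and double them,
# materializing the output as a brand-new table (CTAS)
conn.execute(
    "CREATE TABLE derived AS SELECT id, val * 2 AS val FROM src WHERE val > 10")
result = conn.execute("SELECT id, val FROM derived ORDER BY id").fetchall()
print(result)  # [(2, 40), (3, 60)]
```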
...I'm really looking for a tech co-founder, but if a contractor fits I'm open to it. If you are interested in being a co-founder let me know and we can discuss details. Need to be in the U.S. Example tech stack may include: Programming languages: HTML5/CSS3 Frameworks: Node.js, React Databases: MySQL Cloud platforms: Amazon EC2, Amazon S3 Analytics: Google Mobile App Analytics, Flurry Analytics, Hadoop, Hive, MixPanel, Localytics, Mode, Parquet, Pig, Presto, Spark CDN services: Amazon CloudFront Streaming protocols: RTMP, Adobe HTTP Dynamic Streaming, Apple HTTP Live Streaming, M2TS, MPEG-DASH, Microsoft Smooth Streaming, WebRTC, RTSP/RTP Media formats: MKV, MP4, AVCHD, AVI, DMW, MOV, FLV, WMV, SWF Codecs: H.264/AVC, FFmpeg, XviD Media containers:...