Artificial intelligence hairdressing design

down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
down3 = BatchNormalization()(down3)
down3 = Activation('relu')(down3)
down3 = Conv2D(256, (3, 3), padding='same')(down3)
down3 = BatchNormalization()(down3)
down3 = Activation('relu')(down3)
down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
# 16

down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
down4 = BatchNormalization()(down4)
down4 = Activation('relu')(down4)
down4 = Conv2D(512, (3, 3), padding='same')(down4)
down4 = BatchNormalization()(down4)
down4 = Activation('relu')(down4)
down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
# 8

center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
center = BatchNormalization()(center)
center = Activation('relu')(center)
center = Conv2D(1024, (3, 3), padding='same')(center)
center = BatchNormalization()(center)
center = Activation('relu')(center)
# center

up4 = UpSampling2D((2, 2))(center)
up4 = concatenate([down4, up4], axis=3)
up4 = Conv2D(512, (3, 3), padding='same')(up4)
up4 = BatchNormalization()(up4)
up4 = Activation('relu')(up4)
up4 = Conv2D(512, (3, 3), padding='same')(up4)
up4 = BatchNormalization()(up4)
up4 = Activation('relu')(up4)
up4 = Conv2D(512, (3, 3), padding='same')(up4)
up4 = BatchNormalization()(up4)
up4 = Activation('relu')(up4)
# 16

up3 = UpSampling2D((2, 2))(up4)
up3 = concatenate([down3, up3], axis=3)
up3 = Conv2D(256, (3, 3), padding='same')(up3)
up3 = BatchNormalization()(up3)
up3 = Activation('relu')(up3)
up3 = Conv2D(256, (3, 3), padding='same')(up3)
up3 = BatchNormalization()(up3)
up3 = Activation('relu')(up3)
up3 = Conv2D(256, (3, 3), padding='same')(up3)
up3 = BatchNormalization()(up3)
up3 = Activation('relu')(up3)
# 32

up2 = UpSampling2D((2, 2))(up3)
up2 = concatenate([down2, up2], axis=3)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
# 64

up1 = UpSampling2D((2, 2))(up2)
up1 = concatenate([down1, up1], axis=3)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
# 128

up0 = UpSampling2D((2, 2))(up1)
up0 = concatenate([down0, up0], axis=3)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
# 256

classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0)

model = Model(inputs=inputs, outputs=classify)

# model.compile(optimizer=RMSprop(lr=0.0001), loss=bce_dice_loss, metrics=[dice_coeff])

return model
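The resolution comments in the listing above (# 16, # 8, …, # 256) follow from five 2×2 max-poolings on a 256×256 input and five matching 2×2 upsamplings. As a quick sanity check, here is a small stdlib-only tracer; the function name and structure are illustrative, not part of the original model code:

```python
# Trace the feature-map side length through a symmetric U-Net:
# each MaxPooling2D((2, 2)) halves it, each UpSampling2D((2, 2)) doubles it.
# Illustrative helper; not taken from the article's model code.

def unet_resolutions(input_size=256, depth=5):
    """Return (encoder_sizes, decoder_sizes) for a symmetric U-Net."""
    encoder = [input_size]
    size = input_size
    for _ in range(depth):
        size //= 2                 # MaxPooling2D((2, 2), strides=(2, 2))
        encoder.append(size)
    decoder = []
    for _ in range(depth):
        size *= 2                  # UpSampling2D((2, 2))
        decoder.append(size)
    return encoder, decoder

enc, dec = unet_resolutions()
print(enc)  # [256, 128, 64, 32, 16, 8] -- matches the "# 8" at the center
print(dec)  # [16, 32, 64, 128, 256]    -- matches the decoder comments
```

The symmetry also explains why each `concatenate([downN, upN], axis=3)` is valid: the upsampled tensor has returned to the same spatial size as its encoder counterpart.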

The input is a 256×256×3 color image, and the output is a 256×256×1 mask. The training settings are as follows:

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(image_train, label_train, epochs=100, verbose=1, validation_split=0.2, shuffle=True, batch_size=8)
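The commented-out `compile` call in the listing references `bce_dice_loss` and `dice_coeff`, whose implementations the article does not show. For reference, here is a common NumPy formulation of the Dice coefficient; it is a standard sketch, not the author's exact code:

```python
import numpy as np

def dice_coeff(y_true, y_pred, smooth=1.0):
    """Dice = (2*|A∩B| + s) / (|A| + |B| + s); 1.0 is perfect mask overlap."""
    y_true = np.asarray(y_true, dtype=np.float64).ravel()
    y_pred = np.asarray(y_pred, dtype=np.float64).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

mask = np.array([[1, 1], [0, 0]])
print(dice_coeff(mask, mask))  # 1.0 -- identical masks overlap perfectly
```

A combined loss such as `bce_dice_loss` typically adds binary cross-entropy to `1 - dice_coeff`, trading pixel-wise accuracy against region overlap; that pairing is a common convention, not something stated in the article.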

The resulting segmentation is shown below:

A note on sample labeling: I treat the whole face region as the skin-color region, so the facial features are not excluded. If you want a skin region that excludes the facial features, simply replace the training samples accordingly.

With an accurate skin region in hand, we can upgrade the skin-smoothing algorithm. A set of results follows:

As you can see, a traditional color-space-based smoothing algorithm cannot reliably separate skin from skin-colored regions, so the hair gets smoothed along with the skin and its texture detail is lost. A U-Net-based skin-segmentation algorithm, by contrast, distinguishes skin from skin-colored regions such as hair, preserving hair texture and smoothing only where smoothing belongs. The result is clearly better than the traditional method.
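The selective smoothing described above can be sketched as a mask-guided blend: smooth the whole image once, then use the segmentation mask to keep the smoothed pixels only on skin. The arrays and the stand-in blur below are illustrative; the article does not give the actual blending code:

```python
import numpy as np

def blend_with_mask(image, smoothed, mask):
    """Per-pixel blend: mask * smoothed + (1 - mask) * image."""
    mask = mask.astype(np.float64)
    return mask * smoothed + (1.0 - mask) * image

image = np.array([[10.0, 200.0],
                  [10.0, 200.0]])              # right column: "hair" texture
smoothed = np.full_like(image, 105.0)          # stand-in for a blurred image
mask = np.array([[1, 0],
                 [1, 0]])                      # skin detected on the left only

result = blend_with_mask(image, smoothed, mask)
print(result)  # left column becomes 105.0; right column keeps its values
```

With a soft (0..1) mask taken from the sigmoid output, the same formula gives a gradual transition at the skin boundary instead of a hard edge.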

At present, mainstream apps such as Meitu Xiuxiu and Tiantian P have likewise adopted deep-learning-based skin segmentation to improve their smoothing. This brief introduction should help you understand how they work.

Of course, using deep learning to improve a traditional method is only one pattern, which is why this article is titled "AI beauty skin-smoothing algorithm I". In "AI beauty skin-smoothing algorithm II", I will abandon the traditional method entirely and …

Beauty-industry AI rides the wave: robots find a new breakthrough in the beauty industry


Recently, a professional hairstylist-assistant app launched a new "robot" function that uses artificial intelligence to analyze a customer's face shape and facial features and suggest hairstyles in line with popular aesthetics. It is reportedly the first company in the hairdressing industry to bring artificial intelligence into the salon scene; this creative combination of science and aesthetics sets a precedent for pairing AI with hairdressing.

We often hear stylists complain: housing prices have multiplied several times over, so why does every increase in haircut prices draw so much resentment? The root cause is not that customers find the price high, but that when stylists raise prices, the service they provide does not change noticeably.

As incomes rise, what customers want from a hairstyle is no longer to look like so-and-so; they want something personalized and different from everyone else, ideally a style that belongs to them alone, custom-made. Private customization and "one person, one design" are the inevitable direction of the consumption upgrade.

However, a large number of stylists reportedly still lack the ability to design privately. Only a few can deliver "one person, one design"; many still get by on the three or five set hairstyles they learned at the start, which clearly fails to meet customers' needs.

You cannot ask customers to pay more for the same service. To raise customer spending, the stylist must first upgrade the service.

"Helping every customer get a private design, so that every stylist can design" is the original intention behind the Xiaomei robot. Xiaomei uses artificial intelligence to produce one-on-one hairstyle designs based on a customer's facial features, style, occupation, age, and personal preferences; a stylist only needs to "hire" Xiaomei as a personal assistant to offer every customer one-on-one private customization. Stylists who play games will recognize the idea: it is like gaining a powerful skill and a piece of equipment that greatly boosts your service ability. In this way, stylists raise customer spending by upgrading their own service capability.

Over the past two years, the hairdressing industry as a whole has emphasized technique, customer-relationship maintenance, and the consumer experience, and all of these changes point toward more professional, longer-term service. In improving consumer experience and managing customer relationships, artificial intelligence has many advantages the traditional industry lacks. Xiaomei already gives stylists a one-stop service for offering customers personalized design, from booking through communication and interaction. As the stylist's good assistant, Xiaomei will keep working the hairdressing vertical through the capital winter, continuing to optimize the product and build high barriers in the industry.

Teacher Tony's artificial intelligence: recommending hairstyles while you pass the time

To help people get through the boredom of a haircut, the transparent barber cape was born.

However, hunching over a phone for a long time is hard on the cervical spine and gets in Teacher Tony's way. Now someone has moved the screen onto the mirror, so you can be entertained while getting your hair cut.

According to reports, the mirror display has four functions: audio-visual entertainment, detection and care, beauty-industry information, and a VR hairstyle library.

Audio-visual entertainment provides interactive functions such as video, games, and live streaming; customers can also cast their phone screens onto the mirror display, adding more personalized options.

The detection function performs scalp analysis through an external instrument.

In the beauty-industry information section, Tony teachers can upload their own hairstyles and showcase recent trends.

A closer look reveals a small camera on each mirror display, prepared for the VR hairstyle library.

"After the camera captures the user's facial information, a virtual 3D portrait is generated. The hairstyle can then be changed at will, the length adjusted, or the most suitable style recommended for the user based on the hairstyle library's big data."

Smart hairstyle try-on: isn't that exactly what the intelligent AR hairdressing system does?

The intelligent AR hairdressing system, powered by Paxi, captures the user's head image in real time with an ordinary camera. Combining face recognition, 3D hairstyle simulation, and AR try-on technology, it renders a virtual hairstyle on the user's head on screen in real time, achieving a realistic 3D try-on.

The magic hair mirror can also intelligently recommend styles suited to the user's face shape, and supports online micro-customization of the hairstyle: length, curl, color, and more can all be adjusted online, so customers can choose exactly what they want.

See the result first, then cut the hair: black technology makes the haircut smarter!

Teacher Kevin's artificial intelligence: one-second virtual makeup that can also teach you skin care

Next, let's look at the "makeup mirror" installed in cosmetics retail stores.

Virtual makeup try-on is one of the makeup mirror's main functions. At the interview site, the reporter saw that it now covers lipstick, eye shadow, blush, eyebrow pencil, and other makeup products: the makeup goes on in one second, and the effect is clear at a glance.

The reporter tried the virtual makeup on the spot.

"Makeup products come in many brands and many shades. The makeup process is cumbersome and time-consuming; the makeup mirror lets consumers find the right product quickly."

The makeup mirror's developer, Guangzhou Paxi Software Development Co., Ltd., explained that the mirror brings together AR makeup technology, face recognition, and convolutional-neural-network algorithms to provide technical support for personalized service in the makeup industry.

In fact, one-touch makeup has long existed in apps such as beauty cameras. How does the makeup mirror's try-on differ from camera software?

"The beauty camera is more of an entertainment feature; the makeup mirror is a practical tool that reproduces the product's effect as accurately as possible," the person in charge responded. The mirror also gives merchants a modeling tool to calibrate shades against the actual product.

In addition, equipped with professional measuring instruments, the makeup mirror can provide skin-testing services that assess the skin's moisture, oil, and elasticity.

The makeup mirror evaluates skin moisture, oil, and elasticity.

The "makeup mirror" will offer skin-care recommendations based on the test results and suggest suitable skin-care products.

At present, the makeup mirror has been installed in offline stores such as Kazilan, the YUKI Living Museum, and fast-beauty makeup shops. The market response has been enthusiastic, setting off a wave of AR makeup try-on and AI skin testing.

In the future, smart products such as makeup mirrors, automatic display cases, and smart-store systems will also arrive in beauty stores, truly building a smart shopping environment connecting people, goods, and venues, and empowering new retail.



Zeyuan Image Studio's artificial intelligence suggests hairstyles to suit both your face and your clothing

Zeyuan Image Studio (泽原形象工作室)'s AI hairdressing product is the "Beauty JOYON diagnosis system". Simply photograph the whole body with a camera, and the AI classifies facial balance, bone structure, and more against data from roughly 1,000 people, then suggests the best-matched hairstyle and clothing.


Using AI to change users' image, so the ideal hairstyle and reality are no longer apart

The most frustrating part of getting a haircut is that the imagined result and the actual result always differ. Qingsi (青丝) uses AI to give users a platform for "changing their self-image" in three steps: photograph the face to detect its shape, match hairstyles, and simulate the result, finding the hairstyle that best suits the user's face.

Unlike the hairstyle-matching apps already on the market, Qingsi uses 3D face reconstruction, skin-tone fusion, and lighting-recovery technology to transfer the user's photo onto a model's face, blending the skin tone and lighting of the two photos so the composite looks realistic; users can view both the frontal and the side view. In addition, Qingsi measures the face shape with computer vision and matches hairstyles to the confirmed shape, removing the communication barrier between barber and customer. Existing hair-changing apps simply paste a wig onto the user's photo, more like a sticker, and cannot genuinely match the most suitable style; they also capture model hairstyles by hand, which is labor-intensive and lags behind current fashion. Qingsi's backend can capture the most popular styles from Europe, America, Japan, and Korea at any time, so users can always try the latest.
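As a toy illustration of face-shape matching (the article does not describe Qingsi's actual measurement or categories, so every threshold, label, and suggestion below is invented), a minimal classifier might map a width-to-height ratio measured from detected landmarks to a coarse shape and a lookup table of suggestions:

```python
# Purely illustrative sketch of face-shape matching; thresholds, labels,
# and the recommendation table are hypothetical, not from the article.

def face_shape(width, height):
    """Classify a face by its width-to-height ratio (toy thresholds)."""
    ratio = width / height
    if ratio > 0.95:
        return "round"
    if ratio > 0.80:
        return "oval"
    return "long"

RECOMMENDED = {          # hypothetical shape -> hairstyle table
    "round": "layered cut with volume on top",
    "oval": "most styles suit an oval face",
    "long": "side-swept bangs to visually shorten the face",
}

shape = face_shape(width=14.0, height=16.0)  # ratio 0.875
print(shape, "->", RECOMMENDED[shape])       # oval -> most styles suit ...
```

A real system, as the article notes, would derive such measurements from 3D reconstruction rather than a single ratio; the point is only that a measured shape can key into a style catalogue.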

Having worked in artificial intelligence for many years and getting a haircut almost every month, founder Xin Wanjiang found that most barbershops have no app or system to offer customers hairstyle matching. Customers bring a photo to the barber, but the actual cut falls far short of what they imagined, frustrating both sides. This led Xin Wanjiang to the idea of using cutting-edge technology to give users a platform where they can see a hairstyle intuitively before it is cut.

For revenue, a paid-subscription model regularly pushes the latest popular and personally suitable hairstyles to paying users.

Qingsi is also an e-commerce platform. For now it mainly collects face data; in the future this will be the entry point into the hair-care field, using AI to test hair quality, confirm hair type, and recommend related hair-care and hair-dye products. Xin Wanjiang believes the hair-care market is big enough; in the early stage the team only needs to polish the product and lay the groundwork.

A month and a half after the iOS launch, user acquisition has come mainly through word of mouth. Xin Wanjiang says the team's current goal is to perfect the product: the global hair-care market is big enough, and a good product and solid groundwork early on will give a natural advantage when the company later enters the hair-care track, improving the accuracy of hair-quality testing and gradually building out the hair-care e-commerce platform.

Kuaifa Technology: giant enterprises will emerge in the hairdressing industry within 5 years

Winter is closing in, and even the south has turned cold. Yet at the Hangzhou City Garden Hotel there was no chill at all, only wave after wave of "high temperature": Zhejiang Kuaifa Technology Co., Ltd. held its third-anniversary celebration and new-product launch there under the title "Defining the New Beauty Industry".

Zhejiang Kuaifa Technology is reported to be China's leading fast-cut brand and the advocate and pioneer of intelligent fast cutting in China. Nearly a hundred hairdressing-industry entrepreneurs attended the meeting.

At the meeting, Kuaifa's chairman announced the industry's latest figures: Kuaifa's fast-cut chain now totals 800 stores, serving a cumulative 20 million customers over three years. The company also unveiled the world's first commercial hair-washing robot and launched Smart Shop, an unattended intelligent store-management system for the hairdressing industry.

Supply-side reform is coming to the hairdressing industry

Kuaifa chairman Li Qiang reviewed three years of the new beauty industry. At the end of 2014, companies represented by Kuaifa introduced the fast-cut model to China, setting off a new trend in China's hairdressing industry, and fast-cut stores sprang up across the country.

"Fast cut" means stripping out the large amount of redundant service surrounding a haircut: the shop focuses on cutting and styling, does not push products at consumers, and never pressures them into expensive membership cards costing thousands, so prices are more affordable and the experience more worry-free, convenient, and fast. The model originated in Japan and was quickly welcomed by Chinese consumers after its introduction.

"The era of supply-side reform in China's hairdressing industry has arrived!" Li Qiang sees this as inevitable. The traditional industry has reached a break-or-be-broken moment. The future Chinese market will split in two: one segment for people with higher styling needs, or for specific occasions such as weddings; the mainstream market will serve the mainstream, high-frequency demand for simple trims. As Li Qiang put it: "Most people, most of the time, only need their hair tidied to keep its current shape. They don't need to spend thousands on a card and half a day of drawn-out service each time so that 'Director Tony' can create a whole new look. The boom in fast cutting is precisely the result of this trim-style demand erupting."

Over the past three years, although the company built an enviable profit model, with over 90% of its 800-plus stores profitable, it has kept a steady pace rather than chasing rapid growth. China's hairdressing industry cannot simply copy Japan's QB House model: compared with Japan, China has a vaster market, a more developed Internet environment, and more diverse consumer demands. So for the past two years Kuaifa has focused on perfecting its management system through Internet+ and on raising store efficiency and user experience with intelligent robotics. These explorations and this R&D have now made breakthrough progress, and Kuaifa is ready to release its accumulated strength. Within five years, Kuaifa hopes to grow to 50,000 franchise stores and 100 million active users, making the new beauty industry a mainstream service industry.

An intelligent store-management system will revolutionize the hairdressing industry

At the meeting, Kuaifa Technology demonstrated the Smart Shop 5D store-management system, developed over a year and a half. The system automates attendance tracking, payroll calculation, and store revenue accounting, essentially enabling unattended management of chain stores and greatly improving management efficiency in the hairdressing industry. For store investors, this means saving more than 20% in labor costs, real-time visibility into store operations, and an indirect boost of over 20% in effective revenue.

This unattended store-management system will bring revolutionary change to the industry. "Kuaifa's stores are very profitable, yet we deliberately slowed our expansion and opened only 800 stores, because we were waiting for this system. With this kind of intelligent management, China's new beauty industry finally moves beyond simple imitation of QB House and truly realizes Internet+."

Data shows that within Kuaifa's franchise system, an investor who opens a store with 60,000 to 80,000 RMB can earn an annual return on investment of 200%. Faced with such a profit model, would-be franchisees flocked in, yet Li Qiang and his team turned investors away again and again. The reason: over the past three years, Kuaifa has focused on using Internet+ methods to study and solve the industry's pain points in practice, aiming at deep reform of the hairdressing business.

While Internet+ has dramatically reshaped retail and finance, hairdressing has yet to produce a giant like Didi or Mobike. Li Qiang believes this is because Internet-driven reform of traditional industries has entered deep water: this industry's pain points are more numerous and its management more complex than in retail or transportation.

"To this day, most barbershop chains still rely on people managing people, which is extremely inefficient; a moment's neglect can turn into losses. The industry's core management style is still rule by person, improvable only by adding managers and complexity. Fast cutting, however, is about streamlining processes and staff and focusing on the customer's core need of a haircut, to raise value for money. When person-based management meets streamlined management, an irreconcilable conflict appears. Only a technology upgrade can resolve it, and we believe that within five years the Internet+ approach is bound to produce one or more giant enterprises in the hairdressing industry."

With robots, barbershops will no longer need shampoo workers

At the meeting, an intelligent hair-washing robot ignited the atmosphere. It looks like a cross between a space capsule and a massage chair: when a person lies down, the robot washes their hair through a head cover while massaging the body. Developed primarily by Hangzhou Xunxiuli, a strategic ally of Kuaifa, together with Kuaifa Technology and the Chinese Association of Automation, it is the world's first hair-washing robot ready for commercial use, improving efficiency dozens of times over manual washing and cutting costs by more than 80%.

"A shampoo is comfortable for the customer, but for the shampoo worker it is a job that damages health. With hands soaked in hot water and shampoo for long hours, a twenty-year-old girl's hands quickly become cracked and chapped, like an old person's."

Whether judged by efficiency or by health, manual hair washing should gradually exit history. Moreover, over the next decade labor costs will keep rising, making it a luxury that neither barbershops nor consumers can easily afford.

"The hair-washing robot will free up labor and reduce the harm shampooing does to operators." Li Qiang said consumers once had two worries about the new beauty industry. The first was whether haircut quality could be guaranteed given the high service efficiency; practice has shown that because Kuaifa's barbers do not upsell or push membership cards and focus solely on cutting, their technique is more practiced and the quality clearly exceeds the traditional industry's. The second was whether a machine that merely vacuums up clippings could clean the hair thoroughly; the arrival of the hair-washing robot removes that concern entirely, while also giving customers a comfortable full-body massage.

"For the service industry, Internet+ means far more than writing a few lines of code and inventing a model. This is an industry that fuses a spirit of service, Internet thinking, and intelligent devices," Li Qiang said. "We have worked at this quietly for a long time. Over the next five years, both customers and practitioners will see more and more pleasant changes: haircuts will be easier, cheaper, and better, and opening a barbershop will be easier, more profitable, and more relaxed. This is the supply-side reform coming to the hairdressing industry!"


Hear how hairdressers like Eric rate their own profession (Da Pili, 2017.05.18)

I interviewed 59 hairstylists and decided it was time to change the stereotypes about this group. Over the past month or so, I interviewed 59 hairdressers. Before that, like many people, I held plenty of labelled impressions of them: stylists dress flamboyantly, seize every chance to push hair products or membership cards, and never understand what I mean when I describe the cut I want. With those impressions, every trip to the salon to have my hair done left me anxious, bracing for enthusiasm I could not fend off.

During the interviews, my nervousness was gradually diluted by a kind of envy: envy of how candid stylists are about "beauty". The environment I grew up in suppressed beauty. My mother would often say: don't spend so much time in the mirror, put your energy into studying; don't compare food and clothes with others, compare grades; a beautiful person looks good even with a rag on her head, and if you're not beautiful, don't waste the time. Such repeated views make you feel that loving beauty conflicts with studying, and plant the idea that loving beauty is wrong. Later, in a psychology course covering Maslow's hierarchy of needs, I learned that the aesthetic need is something everyone has; nothing could be more reasonable.

Most stylists, men and women alike, are frank about having loved beauty since childhood. Having entered the trade early, they kept their straightforward pursuit of beauty from a young age and put it into practice to explore it, without the slightest self-suppression. Isn't that a happy thing? Some stylists treat perming and dyeing as scientific research and are obsessed with it; some are proud of being the only trainee in their cohort to be given a name by their teacher; some are already preparing for a retirement of traveling with their tools after fifty; some are full of drive and want to work until eighty; and some, after twenty-odd years, want to leave the industry for good. I picked three who more or less represent the group.

Eric, 36, 15 years in the trade, store manager

VICE: How did you get into the business?
Eric: When I went for haircuts, I got along well with stylists around my age, and they asked why I didn't learn the trade. Like many people, I thought hairdressing looked easy, paid well, and let you dress nicely, so I found a training school and studied for three months. Those three months were actually quite poor; I went out to interviews without any confidence, so I switched to repairing air conditioners. One day I saw in the Modern Express that a salon run by a Taiwanese owner was recruiting apprentices, and I thought: with this chance, why not start over in a barbershop? So I paid the tuition and deposit and began life as an apprentice.

What was apprentice life like?
Before my master started work I would have all his tools ready, and when he finished I would clean up and put them away without being told. Of the dozen or so in that batch, I think I was the only one earning a wage after two weeks. Now, as manager of this branded store, I am the one looking after the apprentices' lives, giving them living allowances, and they no longer do odd jobs. Apprenticeship was of course hard: besides the Sunday meeting, we practiced until late after work every Wednesday, Thursday, and Friday, so every day was packed. Their hardship today is the hardship of technical practice; ours also included the hardship of divided attention.

Apart from that, are young stylists different?
The post-95 generation has far more ideas than we did. They use all kinds of social apps to develop clients, chatting on them and inviting internet celebrities in to have their hair done; they are delighted when those people come. I downloaded those apps too, but the conversation dies after two lines; maybe the generation gap really is deep.

At least judging by your stage names there is no gap.
Stylists basically all have stage names. Once I realized English names were becoming the trend, I asked friends to help list several. But Tony and Steven are far too common; after much picking I chose Eric, which fewer people use and is less tacky. Not that we treat the names with great solemnity; a young guy in our store calls himself Biu Biu.

There are also the skinny trousers, pointed shoes, and five colors of hair.
Many people hold fixed impressions of stylists: low quality, poorly educated, lecherous. I can't deny such people exist; I sometimes run into glib colleagues too. But every industry has such cases. I don't much mind these stock labels; just do your own work well. I firmly believe one saying: whatever level of person you are, that is the level of your clientele. In fact the investment in this trade is large: technique training, clothing and image upkeep, …

Use AI technology to change the image for users, so that there is no longer a gap between the ideal and reality of hairstyles.

When you are getting a haircut, the most troublesome thing is that there is always a gap between the ideal effect and the actual effect. Using AI technology to provide users with a platform to “change their self-image”, the user uses the blue silk, only three steps: photo detection, matching hairstyle, simulation Experience and find the hairstyle that best suits your face.

Different from the existing hairstyle matching APP on the market, using the three-dimensional reconstruction of face, skin color fusion, and light recovery technology, the user's photo is changed to the model face, and the user's photo is merged with the skin color and light of the model's photo. Consistent and more realistic, users can also view the face and side face. In addition, the blue wire can be used to measure the shape of the face based on visual technology, confirm the face matching hairstyle, and solve the communication barrier between the barber and the customer. The existing hair-changing app on the market simply sticks the wig to the user's photo, which is more like a sticker, which can't make the user match the hair style that suits you best, and the previous APP is the model. Hairstyles are directly captured, which is very labor-intensive and has a delay with the current popular hairstyles. It can be used to capture the most popular hairstyles in Europe, America, Japan and Korea at any time, so that users can update and feel at any time.

 

In the field of artificial intelligence, I have been cultivating for many years, and I go to the haircut almost every month. I found that most hairdressers don’t have the corresponding APP or system to provide customers with hair matching services. This leads to the customer taking photos to get a haircut, but the effect of cutting it out is The imaginary gap is particularly large, which has caused both the customer and the barber. Therefore, Xin Wanjiang has created a service platform that can provide users with a visual view by using cutting-edge technology.

In terms of profitability, the paid subscription model regularly pushes the latest popular and suitable hairstyles to paying users.

An e-commerce platform currently collects face information. In the future, it will enter the hair care and hair care field with this as the entrance. Based on AI technology, it will perform hair quality testing, confirm the hair type, and recommend related hair care hair dye products for users. Xin Wanjiang believes that the market for hair care and hair care is large enough. In the early stage, only the products need to be done well, and the preparation work should be paved.

One and a half months after the IOS went online, the customer channel was mainly spread through the user's word of mouth. Xin Wanjiang said that the team's current goal is to make the products well. The global market for hair care and hair care is large enough. In the early stage, the products only need to be prepared and ready for work. When they cut into the hair care and hair care track, they naturally have an advantage. Improve the accuracy of hair quality testing, and begin to gradually deploy the e-commerce platform for hair care.

 

Fast hair technology hairdressing industry will appear in the dragon enterprise within 5 years

The footsteps of winter have gradually approached, and even the weather in the south has been cold. However, in Hangzhou City Garden Hotel, there is no cold atmosphere here, but it is full of waves of “high temperature” – Zhejiang Express Technology Co., Ltd. titled “Defining New Beauty” 3rd Anniversary Celebration and New The product launch conference was held here.

It is reported that Zhejiang Fast Hair Technology Co., Ltd. is China's leading fast-cutting brand, and also an advocate and leader of domestic intelligent fast-cutting. Nearly one hundred hairdressing industry entrepreneurs participated in the meeting.

At the meeting, the chairman of the company issued the latest data of the industry: the total number of quick-cut chain stores has reached 800, and the total number of serviced people in the three-year service has exceeded 20 million. The world's first commercial shampoo robot was also released, and an unattended store intelligent management system called the Smart Shop was launched.

The supply side reform of the hairdressing industry is coming

Li Qiang, chairman of Fast Hair Technology, reviewed the development of the new beauty industry over the past three years. At the end of 2014, enterprises represented by Fast Hair introduced the quick-cut model into China, triggering a new trend in China's hairdressing industry: quick-cut shops quickly sprang up across the country.

The so-called quick cut strips away the large amount of redundant service surrounding a haircut. The barbershop focuses on cutting hair: it does not push products on consumers, nor pressure them into membership cards costing thousands of yuan, so for consumers it is cheaper, more worry-free, more convenient and faster. The model originated in Japan and was quickly embraced by Chinese consumers once introduced to China.

"The era of supply-side reform in China's hairdressing industry has arrived!" Li Qiang believes this is an inevitable development of the times; the traditional hairdressing industry has reached a breaking point. The future Chinese market will divide into two categories: styling-oriented demand, for people with higher styling needs or specific occasions such as weddings, and trimming-oriented demand, which will be the high-frequency mainstream market. Li Qiang pointed out: "Most of the time, most people only need to tidy their hair and keep their current shape. They don't need to spend thousands of yuan on a membership card, or spend half a day waiting through lengthy services and asking 'Director Tony' for a brand-new look every time. The rapid growth of the quick-cut industry is the result of this trimming-oriented demand."

 

In the past three years, despite an impressive profit model (more than 90% of its 800-plus stores are profitable), the company has kept a steady pace rather than chasing high-speed growth. China's hairdressing industry cannot simply copy Japan's QB House model: compared with Japan, China has a broader market, a more developed Internet environment, and more diverse consumer demands. So over the past two years, Fast Hair has focused on improving its management system through Internet+, and on raising store efficiency and user experience through intelligent robotics. Now that these explorations and this R&D have achieved breakthroughs, the company is in a position to grow substantially. In the next five years, Fast Hair hopes to reach 50,000 franchise stores and 100 million active users, making the new beauty industry a mainstream service industry.

The smart store-management system will bring a revolution to the hairdressing industry

At the meeting, Fast Hair Technology demonstrated the Smart Shop 5D intelligent store-management system, developed over a year and a half. The system automatically handles attendance, staff payroll calculation, and store revenue statistics, essentially realizing unmanned management of the chain's stores, which will greatly improve management efficiency in the hairdressing industry. For salon franchisees, this means saving more than 20% in labor costs, tracking store operations in real time, and indirectly increasing effective income by more than 20%.

 

This unattended automatic store-management system will revolutionize the industry. "Quick-cut stores are this profitable, yet we deliberately slowed our pace of expansion and opened only 800 because we were waiting for this system. With this intelligent management approach, China's new beauty industry has moved beyond simple imitation of QB House and truly realized Internet+."

According to the data, in Fast Hair's franchise system an investor can open a store for 200,000 to 800,000 yuan and earn an annual return on investment of up to 200%. "Faced with such a good profit model, would-be franchisees are eager to join." Yet Li Qiang and his team have repeatedly turned investors away. The reason: for the past three years they have been committed to using Internet+ to work through and solve the pain points of the hairdressing industry, in the hope of reforming it deeply.

While Internet+ has vigorously transformed the retail and financial industries, the hairdressing industry has yet to produce a giant like Didi or Mobike. Li Qiang believes this is because the Internet's reform of traditional industries has entered deep water: this industry has many pain points and complicated management, far exceeding retail and travel.

"Up to now, most chain barbershops have remained in a mode of management by people, with extremely low efficiency; a little carelessness leads to losses through neglected management. The industry still manages employees by human rule, which can only be improved by adding layers of management and thus complexity. But the core of the quick cut is to streamline processes and personnel and to focus on satisfying the consumer's core demand for a trim, improving cost performance. This stands in irreconcilable contradiction with heavier management, and only technological upgrading can resolve that contradiction. We believe that in the next five years, the Internet+ approach will certainly give birth to one or a group of leading enterprises in the hairdressing industry."

With robots, barbershops will no longer need shampooers

At the meeting, an intelligent shampoo robot ignited the atmosphere. The robot looks like a cross between a capsule and a massage chair: when a customer lies down, it washes their hair through a hood while massaging their body. The robot was developed under the leadership of Hangzhou Xunmei Co., a strategic-alliance company of Fast Hair, with Fast Hair Technology and the China Automation Society participating in R&D. It is also the world's first intelligent shampoo robot ready for commercial use, raising operational efficiency dozens of times over manual work and cutting costs by more than 80%.

 

"A shampoo is very comfortable for the customer, but for the shampooer it is a profession harmful to health. With hands soaked in hot water and shampoo for long hours, even a 20-year-old girl's hands soon become soaked and chapped like an old person's."

Whether in terms of efficiency or health, manual shampooing should gradually withdraw from history. And over the next decade, labor costs will keep rising, making it a luxury that neither barbershops nor consumers can afford.

"The shampoo robot will liberate labor and reduce the harm shampooing does to the operator." Li Qiang said consumers have had two worries about the new beauty industry. The first is whether such efficient service can still guarantee haircut quality. Practice has shown that because quick-cut stylists do not sell products or push membership cards and focus only on cutting, their skills are more refined and the quality is noticeably better than in traditional salons. The second worry is whether a machine that merely vacuums up the cut hair can really get the hair clean. The shampoo robot completely dispels this concern, while also giving consumers a comfortable body massage.

"Internet+ in the service industry is not just writing a few lines of code to build a model. It is an industry that combines the spirit of service, Internet thinking, and smart devices." Li Qiang said: "We have been working hard at this for a long time. In the next five years, both customers and hairdressing practitioners will see more and more pleasant changes: getting a haircut will be more worry-free, cheaper, and better; opening a barbershop will be easier, more profitable, and less stressful. This is the supply-side reform the hairdressing industry is about to welcome!"

Listen to the hairdressers: how the "Erics" evaluate their careers (2017.05.18)

Having interviewed 59 hair stylists, I felt it was time to change the stereotype of this group. Over the past month or so I interviewed 59 hairdressers. Before that, like many people, I carried a pile of labels about them: hair stylists are flashy, always looking for a chance to sell products or push membership cards, never able to understand the haircut I describe... With that impression, every walk into a salon made me nervous, afraid of an enthusiasm so excessive I could not fend it off. Over the course of the interviews, my tension was gradually diluted by a kind of envy: envy of the stylists' honesty toward "beauty." The environment I grew up in took a repressive attitude toward beauty. My mom often said things like: don't spend so much time looking in the mirror, put your energy into studying; don't compare food and clothes with others, compete at your schoolwork; a beautiful person looks good even in rags, and a plain one shouldn't waste the time... Views like these make you feel that beauty conflicts with study, that "loving beauty is wrong." Later, a psychology course introduced Maslow's hierarchy of needs: aesthetics is a need everyone has, and a perfectly reasonable one. Most hair stylists, men and women alike, are honest about having loved beauty since childhood. Entering the trade early, they kept that straightforward pursuit of beauty from a very young age and put it into practice, without the slightest self-repression. Isn't that actually a happy thing? Some stylists treat perm-and-dye technique as a science to obsess over; some are proud of being the only one in their cohort granted the title of teacher.
Some stylists past 50 are already traveling with their tools as they prepare for retirement; some are passionate enough to plan on working until 80; many have been in the industry for more than 20 years. I have chosen three people who more or less represent the group.

Eric, 36, 15 years in the business, store manager

VICE: How did you get into this line of work?

Eric: When I went for haircuts I got along well with stylists around my age, and they asked, why not come learn? Like many people, I thought the hairdressing trade looked easy: good money, dress well. So I went to a training school for three months. Those three months actually went quite badly; I didn't have the confidence to go out and interview, so I switched to air-conditioner maintenance. One day I saw a notice in Modern Express for an apprenticeship at a salon run by a Taiwanese boss, and I figured it would be better to get my chance inside a real barbershop. So I paid tuition and a deposit and began life as an apprentice. What was that life like? Before the master started work I laid out all his tools; after work I cleaned up without being told. Of the dozens of apprentices in my batch, it seems only two of us lasted until we could draw a salary. Now, as the manager of this branch, I give my apprentices living expenses, and they don't have to do chores. Apprenticeship is of course hard work: apart from the regular Sunday meeting, they practice late after closing every Wednesday, Thursday and Friday, a schedule that fills their days. But their hardship is the hardship of practicing technique, while ours back then was hardship plus distraction. What else is different about the young stylists?
The post-95 generation has more ideas than we did, and they use all kinds of social apps to develop customers: chat online, offer to do some internet celebrity's hair, and the celebrity is delighted. I tried the apps too, but I'd run out of things to say after two sentences; maybe the generation gap runs deep. At the very least, you can't spot the generation gap from the stage names. Hair stylists basically all have one. When I realized English names were becoming the trend, I asked friends to help me pick. There are far too many Tonys and Stevens, so I chose Eric: relatively few people use it, and it's not so vulgar. You can't say nobody takes stage names seriously, either; another guy in our store named himself Biu Biu. Then there are the skinny trousers, the pointed leather shoes, the hair in five colors. Many people hold fixed impressions of hair stylists: poor character, little education, lecherous. I won't deny such people exist; I sometimes run into slick, oily-mouthed peers myself. But every industry has examples like that. I don't care much about these labels; I just do my own work. I especially believe one saying: whatever level of person you are, that's the level of customers you attract. In fact our investment in this trade is very large: technical training, clothing and image management,


investment in our work, and so on. At first I couldn't understand my boss's point: even if you don't earn much at the start, you must be willing to invest in yourself; the less you invest, the poorer you stay. Later I understood the sentence very well: dressed cleanly and neatly, looking spirited, I naturally win the favor of higher-end customers; how else would I progress? Every stylist carries sales pressure, so products get promoted while hair gets done. What is your relationship with your customers? I've been at this store for ten years, and many customers are around my age; they have followed me from college into work, marriage and children. Their trust is the greatest sense of accomplishment this job gives me. Last year I planned to buy an apartment but was a little short of funds. One customer said to me: "Eric, hurry up and buy; prices climb every day, give your family and children a stable life. If the money isn't enough, I'll lend you 100,000, pay it back within a year." Two customers lent me 150,000 in total, and I bought the apartment before the New Year. That meant a great deal to me; even relatives won't necessarily lend you that much. So this job gives you a great deal besides money.

People in hairdressing doing hairdressing is a good thing; in my view, this is not a trade without a future. My wife and I are both in the industry. My daughter is three now and loves to preen: sometimes she imitates the grown-ups with an unplugged curling iron. If she still likes "being vainly beautiful" when she's older, I won't interfere; if anything, I'll teach her how to be more beautiful. And if she wants to be a hairdresser one day, I'm willing to pay whatever it costs for her to study. I really think people's views of the hairdressing industry have to change.

Bingbing, 35, 15 years in the business, designer

VICE: Why aren't you called Mary or Lina?

Bingbing: My parents wanted a boy, so they gave me a boyish name with a lot of strokes, and I never liked it. Ten years ago Fan Bingbing and Xiaoyanzi were wildly popular, and I thought "Bingbing" sounded especially nice, so I made it my stage name. At first I couldn't get used to it, with an awkward fear of being seen through, but I grew fonder and fonder of it. The place I work now requires every stylist to take an English name. I have no interest in that; I just grabbed one, and I still like people calling me Bingbing. Hearing that name feels warm. How did you get started? When I was 18, walking down the street one day, I passed a hair salon that felt very refined. (...) Guests ask me: "Bingbing, do you think this dress suits me? Does this bag suit me?"

Laohu, 45, 24 years in the trade, owner of an affordable community hair salon

VICE: Is your stage name Laohu?

Lao Hu: The stylists at our budget store have no stage names, and we require no special image beyond staying clean and tidy. Some people go out for training courses; I started very young, and I learn from teaching videos. Why are hair stylists always considered flashy?
I don't mind this stereotype from the outside world. It's a matter of personal style, and it's hardly news that people in this trade should be fashionable. How did you get started? My dad made me learn it, so hairdressing is simply a job to me; I can't say how much I like it. I've never done any other profession: I've been a stylist since I learned the trade, and later I came to own this store and became my own boss. Have you felt the hairdressing industry change? Fewer and fewer people become hair stylists now, and many around me have switched careers. I've seen a figure saying the number of stylists has fallen from 5 million to 2 million. No wonder: this trade is too hard. Standing from early morning until late at night has given my legs varicose veins. Before long I'm going to close the door and stop; I'm ready to leave completely, never set foot in the hairdressing industry again, and find work somewhere else. What's the difference between running a store and being a stylist? Since I'm quitting anyway, I'll speak from the heart. The biggest difference is the income is higher. But running a store is more tiring, and managing people is not easy; these days the employee is the boss and the boss is the errand boy. Business keeps getting cheaper, too; some stores are willing to cut hair for seven or eight yuan. What are the rest of us supposed to do? I also think stylists should get to rate customers, not always be the ones rated. After all these years I've seen every kind of person, and I've drawn a small conclusion of my own: men over 40 are the least fussy, especially the ones wearing glasses; a shop that relied only on men's haircuts would never survive. A woman will happily pay 1,200 for a perm and dye, while an old man finds a dozen yuan for a cut too expensive. How many stores are failing now?
Once a guest brought her child in for a haircut, and because we cut it a little short she wanted to smash up the shop. We apologized and soothed her for a long time; it wears you out, body and heart. So I want to leave this industry for good. After more than 20 years I will certainly miss my craft, but there's no way around it: my legs are bad, and business is hard to do.

New Year: have you had your perm yet? Artificial intelligence disrupts the hairdressing service industry

  

 "Money or no money,

get a perm ~~~~

~~~ for the

~~~ New Year"

 

The Spring Festival is approaching, and the office fills every day with the chemical smell of fresh perms, making this editor feel like one of the Tonys. The grand project involving hundreds of millions of people, the New Year perm, is proceeding in an orderly fashion.

But! Have you ever had an experience like this...

 

A perm can be a happy thing, but in many cases it can also turn into a nightmare.

As living standards improve, consumption upgrading is the general trend, and its essence is the human pursuit of personalized, customized services. Hairdressing, an extremely personal service, has especially high requirements for "private customization": the same hairstyle, affected by factors such as personal temperament, face shape and hair quality, looks completely different on different people.

In traditional hairdressing service, consumers' needs are difficult to fully express and realize. The "private customization" requirement also restricts the industry's development; hairdressing is known as one of the slowest-developing service industries in the world today.

 

Artificial intelligence may be the most talked-about topic in recent technology. Since AlphaGo defeated a professional Go player in 2016, AI has repeatedly grabbed media headlines.

At present it is showing strong momentum. On social and e-commerce platforms, AI powers targeted advertising, recommending retailers and products of interest to users. In the financial industry, it analyzes large numbers of historical fraud cases and applies that knowledge to prevent future fraud and reduce financial risk. AI also greatly improves the consumer experience in our daily interaction with technology, letting us engage more immersively through devices such as phones and tablets.

In the future, artificial intelligence will direct autonomous vehicles through our towns, freeing our hands and reducing the likelihood of accidents.

 

The arrival of artificial intelligence has repeatedly overturned people's perceptions of industries. Many industries have begun to face the opportunities and changes AI brings, and hairdressing is no exception. The development of AI heralds mass personalization: not only the products we use but also the services we receive will undergo a huge transformation.

Intelligent hair-styling products based on AI generally comprise four capabilities: "smart recognition," "smart matching," "smart expression," and "smart learning."

 

Smart recognition: the system scans the customer's head and face and, through image-recognition technology trained on a large amount of data (deep learning with neural networks, using algorithms that continuously self-iterate as data accumulates), diagnoses the customer's style and judges their features.

 

Smart matching: a neural-network engine encodes hair-styling principles and recommends hairstyles based on the recognized customer features and personal style, while continuously iterating on stylist behavior data and consumer feedback to give personalized hairdressing suggestions.

 

Smart expression: the matching results are presented to the customer intuitively, so that before receiving the service the customer has a vivid picture of the final result.

 

Smart learning: the product can learn on its own, continuously improving the accuracy of its face and style judgments and converging on hair designs that fit public aesthetics.

Such an AI-based product reverses the traditional salon dynamic, turning the stylist's one-way output into behavior led by the customer's own demands, and helps the stylist design a hairstyle better suited to the customer. It will raise the personalized, customized service quality of the hairdressing industry and bring customers a consumption-upgrade experience.

It is believed that as AI technology spreads, the hairdressing service industry will see even greater development.

Deep learning AI beauty tutorial: AI hair-coloring special effects

Changing the color of a person's hair in a photo or video is a technique already used in mobile retouching apps such as Meitu Xiuxiu, and it has won the favor of many users.

How do I change the color of a person's hair in a photo or video?

The overall pipeline of the hair-coloring algorithm is shown in the following figure:

 

1. AI hair segmentation module

Deep-learning-based segmentation algorithms are relatively mature; commonly used networks include FCN, SegNet, UNet, PSPNet, DenseNet, and so on.

Here we use a UNet for hair segmentation; for details, please refer to the link in the original post.

The UNet hair-segmentation code is as follows:

 

from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, UpSampling2D, concatenate)

def get_unet_256(input_shape=(256, 256, 3),
                 num_classes=1):
    inputs = Input(shape=input_shape)
    # 256

    down0 = Conv2D(32, (3, 3), padding='same')(inputs)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0 = Conv2D(32, (3, 3), padding='same')(down0)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
    # 128

    down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64

    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32

    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16

    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8

    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center

    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16

    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32

    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64

    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128

    up0 = UpSampling2D((2, 2))(up1)
    up0 = concatenate([down0, up0], axis=3)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    # 256

    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0)

    model = Model(inputs=inputs, outputs=classify)

    # model.compile(optimizer=RMSprop(lr=0.0001), loss=bce_dice_loss, metrics=[dice_coeff])

    return model

An example segmentation result is shown below:

 

The training and test data sets used are all ready for you.

2. Hair color-change module

This module looks simple, but it is not.

It is subdivided into: (1) a hair-color enhancement and correction module; (2) a color-space dyeing module; (3) hair detail enhancement.

(1) Hair-color enhancement and correction module

Why do we need color enhancement and correction?

First look at the following set of images. If we dye pure-black hair directly in the HSV color space, with purple as the target color, the results are as follows:

 

As you can see, in the original picture the hair is quite dark. After the color change in HSV space, the effect is barely visible: only a slight color shift.

Why does this happen? The reason is as follows.

Take the RGB and HSV color spaces as an example, and first look at the conversion formula between them:

Let (r, g, b) be the red, green, and blue coordinates of a color, each a real number between 0 and 1. Let max be the largest of r, g, and b, and min the smallest. To find the (h, s, v) values in HSV space, where h ∈ [0°, 360°) is the hue angle and s, v ∈ [0, 1] are the saturation and value, the calculation is:
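The conversion figure itself did not survive extraction; for reference, the standard RGB-to-HSV formulas (the variant the later C code implements, with value V in place of HSL's lightness L) are:

```latex
h = \begin{cases}
0^\circ & \text{if } \max = \min \\
\left(60^\circ \times \dfrac{g - b}{\max - \min}\right) \bmod 360^\circ & \text{if } \max = r \\
60^\circ \times \dfrac{b - r}{\max - \min} + 120^\circ & \text{if } \max = g \\
60^\circ \times \dfrac{r - g}{\max - \min} + 240^\circ & \text{if } \max = b
\end{cases}
\qquad
s = \begin{cases}
0 & \text{if } \max = 0 \\
\dfrac{\max - \min}{\max} & \text{otherwise}
\end{cases}
\qquad
v = \max
```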

 

Suppose the hair is pure black: R = G = B = 0. Then by the HSV formula, H = S = V = 0.

Suppose we want to dye the hair red (r = 255, g = 0, b = 0).

We first convert the red to its corresponding HSV values, then keep the V of the original black hair, take the H and S of the red, recombine them into a new (h, s, V), and convert back to RGB; that is the recolored result. (H and S carry the color attributes, V the brightness: keeping the original hair's brightness while replacing the color attributes is what achieves the color change.)

The formula for converting HSV back to RGB is as follows:

 

For black, we computed H = S = V = 0. Since V = 0, we get p = q = t = 0, so regardless of the target color's H and S values, the resulting RGB is always 0, that is, black.

Thus, although we used red to replace the black hair, the result is still black. The conclusion: the HSV/HSL color spaces cannot change the color of pure black.
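This dead end is easy to verify with Python's standard colorsys module; a small sketch of the argument above (illustrative, not the article's own code):

```python
import colorsys

def recolor_hsv(src_rgb, target_rgb):
    """Replace the hue/saturation of src with the target's, keeping src brightness (V)."""
    _, _, v = colorsys.rgb_to_hsv(*(c / 255.0 for c in src_rgb))
    h, s, _ = colorsys.rgb_to_hsv(*(c / 255.0 for c in target_rgb))
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return tuple(round(c * 255) for c in (r, g, b))

# Pure black hair, target red: V = 0, so the output is still pure black.
print(recolor_hsv((0, 0, 0), (255, 0, 0)))   # -> (0, 0, 0)
# A dark but non-black pixel does pick up the target hue, just very faintly.
print(recolor_hsv((20, 20, 20), (255, 0, 0)))
```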

Below we show the purple dyeing results produced by ordinary photo retouching (a "P picture", i.e. Photoshop work) and by a beauty camera app:

 

Compared with the earlier HSV color-space results, we can clearly see that both the retouched picture and the beauty camera are more effective and better looking, achieving a convincing color change even for near-black hair.

For the above reasons, we need to apply some enhancement to the hair region of the image first: brighten it and slightly shift its color tone.

This step can usually be done by brightening the color in Photoshop and then processing it with a LUT (look-up table).
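As a rough stand-in for the Photoshop + LUT step, a brightening curve can be baked into a 256-entry look-up table. A hedged numpy sketch (the gamma value here is an arbitrary assumption, not the article's tuning):

```python
import numpy as np

def build_brighten_lut(gamma=0.6):
    """256-entry LUT for a gamma curve; gamma < 1 lifts dark tones the most."""
    x = np.arange(256, dtype=np.float32) / 255.0
    return np.clip(np.power(x, gamma) * 255.0, 0, 255).astype(np.uint8)

def apply_lut(image, lut):
    """image: uint8 array of any shape; LUT indexing remaps every channel value."""
    return lut[image]

lut = build_brighten_lut()
hair = np.array([[[10, 12, 14]]], dtype=np.uint8)  # a near-black hair pixel
print(apply_lut(hair, lut))  # dark values are lifted noticeably
```

Because near-black values are lifted most strongly, the subsequent HSV recombination has a usable V component to work with.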

The coloring effect after brightening is shown below:

 

As you can see, the result is now basically comparable to the beauty camera and ordinary retouching.

2. HSV/HSL/YCbCr color-space color change

This step is relatively simple: leave the brightness component unchanged and replace the hue and saturation components with those of the target hair color.

Here is an example of the HSV color space:

If we want to dye the hair half cyan and half pink, we build a color MAP as shown below:

 

For each pixel point P of the hair region, we convert the RGB of P to the HSV color space to get H/S/V;

According to the positional relationship of P in the original hair area, we find the pixel D of the corresponding position in the color MAP, convert the RGB of D into the HSV color space, and get the h/s/v of the target color;

We recombine the target color's h/s with the original pixel's V, then convert back to RGB;

The code for this module is as follows:

 

// h ∈ [0, 360), s ∈ [0, 1], v ∈ [0, 1]
// MIN2/MAX2/CLIP3 are small helper macros, defined here for completeness:
#define MIN2(a, b)     ((a) < (b) ? (a) : (b))
#define MAX2(a, b)     ((a) > (b) ? (a) : (b))
#define CLIP3(x, a, b) ((x) < (a) ? (a) : ((x) > (b) ? (b) : (x)))

void RGBToHSV(int R, int G, int B, float* h, float* s, float* v)
{
    float min, max;
    float r = R / 255.0f;
    float g = G / 255.0f;
    float b = B / 255.0f;
    min = MIN2(r, MIN2(g, b));
    max = MAX2(r, MAX2(g, b));
    if (max == min)              // r = g = b: hue undefined, use 0 (also avoids 0/0 below)
        *h = 0;
    else if (max == r && g >= b)
        *h = 60.0f * (g - b) / (max - min);
    else if (max == r && g < b)
        *h = 60.0f * (g - b) / (max - min) + 360.0f;


    else if (max == g)
        *h = 60.0f * (b - r) / (max - min) + 120.0f;
    else if (max == b)
        *h = 60.0f * (r - g) / (max - min) + 240.0f;

    if (max == 0)
        *s = 0;
    else
        *s = (max - min) / max;
    *v = max;
}

void HSVToRGB(float h, float s, float v, int* R, int* G, int* B)
{
    float q = 0, p = 0, t = 0, r = 0, g = 0, b = 0;
    int hN = 0;
    if (h < 0)
        h = 360 + h;
    hN = (int)(h / 60);          // which 60-degree sector the hue falls in
    p = v * (1.0f - s);
    q = v * (1.0f - (h / 60.0f - hN) * s);
    t = v * (1.0f - (1.0f - (h / 60.0f - hN)) * s);
    switch (hN)
    {
    case 0:
        r = v; g = t; b = p;
        break;
    case 1:
        r = q; g = v; b = p;
        break;
    case 2:
        r = p; g = v; b = t;
        break;
    case 3:
        r = p; g = q; b = v;
        break;
    case 4:
        r = t; g = p; b = v;
        break;
    case 5:
        r = v; g = p; b = q;
        break;
    default:
        break;
    }
    *R = (int)CLIP3(r * 255.0f, 0, 255);
    *G = (int)CLIP3(g * 255.0f, 0, 255);
    *B = (int)CLIP3(b * 255.0f, 0, 255);
}
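For reference, the per-pixel dyeing loop described above can also be sketched in Python with the standard colorsys module. The one-target-color-per-row `color_map_rows` argument is a simplified stand-in for the article's color MAP image, and the toy inputs are illustrative assumptions:

```python
import colorsys

def dye_hair(pixels, mask, color_map_rows):
    """pixels: rows of (r, g, b) tuples; mask: same shape, True where hair;
    color_map_rows: one target (r, g, b) per row, standing in for the color MAP."""
    out = [row[:] for row in pixels]
    for y, row in enumerate(pixels):
        for x, (r, g, b) in enumerate(row):
            if not mask[y][x]:
                continue
            # Keep the hair pixel's V; take H/S from the map at this position.
            _, _, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            tr, tg, tb = color_map_rows[y]
            h, s, _ = colorsys.rgb_to_hsv(tr / 255, tg / 255, tb / 255)
            nr, ng, nb = colorsys.hsv_to_rgb(h, s, v)
            out[y][x] = (round(nr * 255), round(ng * 255), round(nb * 255))
    return out

# Top row dyed cyan, bottom row pink, on a tiny 2x1 "hair" image.
img = [[(120, 100, 90)], [(120, 100, 90)]]
mask = [[True], [True]]
cmap = [(0, 255, 255), (255, 105, 180)]
print(dye_hair(img, mask, cmap))
```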

The effect diagram is as follows:

 

The effect of this algorithm, compared with the beauty camera, is as follows:

 

3. Hair area enhancement

This step mainly highlights the details of the hair; you can use sharpening algorithms such as Laplace sharpening, USM sharpening, and so on.
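A minimal sketch of Laplace-style sharpening in plain numpy (the kernel and strength here are illustrative assumptions, not the article's tuning):

```python
import numpy as np

def sharpen(gray, amount=1.0):
    """Laplace sharpening on a grayscale image: out = img - amount * laplacian."""
    img = gray.astype(np.float32)
    # 4-neighbour Laplacian via shifted copies (edges padded by replication).
    pad = np.pad(img, 1, mode='edge')
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
           - 4.0 * img)
    return np.clip(img - amount * lap, 0, 255).astype(np.uint8)

flat = np.full((5, 5), 100, dtype=np.uint8)
step = flat.copy()
step[:, 3:] = 200                # a vertical edge
print(sharpen(step)[2])          # contrast across the edge is exaggerated
```

Flat regions (where the Laplacian is zero) are left untouched, which is why sharpening emphasizes hair strands without altering smooth areas.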

The above process essentially simulates the hair-coloring algorithm of a beauty camera. For reference, some example results of the algorithm are given below:

 

In addition to normal monochrome dyeing and mixed-color dyeing, this article's method also supports highlight dyeing, as shown in the bottom set of renderings.

The principle of the highlighting algorithm: compute the hair texture, select the strands to be highlighted according to that texture, and then dye those strands separately from the rest of the hair. The specific logic is not elaborated here; readers can study it themselves, and the above is offered as a reference solution.

Finally, the algorithm in this article should pose no problem for real-time processing: hair segmentation can run in real time, and there are essentially no time-consuming operations afterwards, so real-time coloring implemented with OpenGL is feasible.

Deep learning AI beauty series: AI beauty skin-smoothing algorithm I

First of all, why is this titled "AI beauty skin-smoothing algorithm I" rather than simply "AI beauty skin-smoothing algorithm"?

Because the AI skin-smoothing algorithm has no settled definition yet, and the major companies are still exploring it, this article only presents my own implementation scheme. The algorithm here differs greatly in perspective from that of the follow-up article, "AI beauty skin-smoothing algorithm II", hence the distinction.

Let's first look at the general flow of the skin-smoothing algorithm:

 

This is a general, traditional skin-smoothing flow chart; this article builds on it, combining deep learning to make some improvements.

The flow chart has two main modules: a filtering module and a skin-color area detection module.

The filtering module involves three classes of algorithms:

1. Edge-preserving filtering algorithms

This method smooths the image with a filter that has edge-preserving ability, achieving smooth skin.

The main types of filters are:

(1) bilateral filter

(2) guided filter

(3) Surface Blur filter

(4) local mean filter

(5) weighted least squares (WLS) filter

(6) Smart Blur, etc. For details, please refer to my blog.

This method leaves the skin area smooth with little detail, and requires adding detail information back later to preserve a natural texture.

2. High-contrast weakening algorithm

The high-contrast-retention step yields a MASK containing the skin details. Using the detail areas in the MASK, such as the positions of blemishes in the skin, the corresponding areas of the original image are color-dodged, thereby weakening the blemishes and beautifying the skin.

This method lightens skin blemishes and spots while retaining texture, so the skin looks smooth and natural.

3. Other algorithms

There are also various other algorithms, published or not, such as combining edge-preserving filtering with high-contrast retention: run steps 1-2 on the original image to obtain a smooth filtered image and the corresponding high-contrast detail MASK, then use the MASK as an alpha channel to blend the original image with the filtered image, achieving smooth skin while removing blemishes and retaining texture.
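A minimal numpy sketch of that alpha-fusion step (the toy 1x2 images are illustrative assumptions):

```python
import numpy as np

def fuse(original, smoothed, detail_mask):
    """Alpha-blend: where detail_mask is high (spots), take the smoothed image;
    where it is low (texture to keep), fall back to the original."""
    alpha = detail_mask.astype(np.float32) / 255.0
    out = alpha * smoothed.astype(np.float32) + (1.0 - alpha) * original.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

orig = np.array([[80, 120]], dtype=np.uint8)
smooth = np.array([[100, 100]], dtype=np.uint8)
mask = np.array([[255, 0]], dtype=np.uint8)   # first pixel flagged as a spot
print(fuse(orig, smooth, mask))               # -> [[100 120]]
```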

Skin area recognition and detection module

Currently, commonly used skin detection is mainly based on skin-color statistics in some color space.

This method has a high false-detection rate: it easily classifies skin-colored non-skin pixels as skin, so non-skin areas of the image get smoothed by the filter, i.e. regions that should not be touched are blurred.
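The statistical approach referred to here typically thresholds the Cr/Cb channels of YCbCr. A sketch using one widely quoted rule of thumb (the exact thresholds vary between papers and are an assumption here):

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """rgb: uint8 array of shape (..., 3). Boolean mask from fixed Cr/Cb ranges."""
    r, g, b = [rgb[..., i].astype(np.float32) for i in range(3)]
    # BT.601 RGB -> CbCr conversion.
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Commonly cited empirical ranges for skin tones.
    return (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)

pixels = np.array([[[224, 172, 150],              # a typical skin tone
                    [30, 90, 200]]], dtype=np.uint8)  # blue, clearly not skin
print(skin_mask_ycbcr(pixels))
```

Fixed ranges like these accept anything skin-colored, wood, sand, hair included, which is exactly the false-positive problem that motivates replacing this module with a learned segmentation.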

This is the focus of this article: below, we use deep learning to improve the traditional skin-smoothing pipeline, for example by segmenting the skin area with a deep network to obtain a more accurate skin region, so that the final smoothing result exceeds what the traditional algorithm achieves.

Below, we introduce skin segmentation based on deep learning:

There are many segmentation approaches: CNN/FCN/UNet/DenseNet, etc. Here we use UNet for skin segmentation.

UNet was introduced for image segmentation in the paper "U-Net: Convolutional Networks for Biomedical Image Segmentation".

Its initial network model is as follows:

 

This is a fully convolutional neural network: the input and output are both images, and there is no fully connected layer. The shallower, high-resolution layers solve the pixel-localization problem, while the deeper layers solve the pixel-classification problem.

The left side performs convolution and downsampling while retaining the result at each scale; during upsampling on the right side, each upsampled result is merged with the corresponding left-side result, which improves the segmentation.

The left and right sides of this original network are asymmetric; later improved UNet variants are basically symmetric in image resolution. This article uses Keras, and the implemented network structure is as follows:

1. Layer (type) Output Shape Param # Connected to

2. ==================================================================================================

3. input_1 (InputLayer) (None, 256, 256, 3) 0

4. __________________________________________________________________________________________________

5. conv2d_1 (Conv2D) (None, 256, 256, 32) 896 input_1[0][0]

6. __________________________________________________________________________________________________

7. batch_normalization_1 (BatchNor (None, 256, 256, 32) 128 conv2d_1[0][0]

8. __________________________________________________________________________________________________

9. activation_1 (Activation) (None, 256, 256, 32) 0 batch_normalization_1[0][0]

10. __________________________________________________________________________________________________

11. conv2d_2 (Conv2D) (None, 256, 256, 32) 9248 activation_1[0][0]

12. __________________________________________________________________________________________________

13. batch_normalization_2 (BatchNor (None, 256, 256, 32) 128 conv2d_2[0][0]

14. __________________________________________________________________________________________________

15. activation_2 (Activation) (None, 256, 256, 32) 0 batch_normalization_2[0][0]

16. __________________________________________________________________________________________________

17. max_pooling2d_1 (MaxPooling2D) (None, 128, 128, 32) 0 activation_2[0][0]

18. __________________________________________________________________________________________________

19. conv2d_3 (Conv2D) (None, 128, 128, 64) 18496 max_pooling2d_1[0][0]

20. __________________________________________________________________________________________________

21. batch_normalization_3 (BatchNor (None, 128, 128, 64) 256 conv2d_3[0][0]

22. __________________________________________________________________________________________________

23. activation_3 (Activation) (None, 128, 128, 64) 0 batch_normalization_3[0][0]

24. __________________________________________________________________________________________________

25. conv2d_4 (Conv2D) (None, 128, 128, 64) 36928 activation_3[0][0]

26. __________________________________________________________________________________________________

27. batch_normalization_4 (BatchNor (None, 128, 128, 64) 256 conv2d_4[0][0]

28. __________________________________________________________________________________________________

29. activation_4 (Activation) (None, 128, 128, 64) 0 batch_normalization_4[0][0]

30. __________________________________________________________________________________________________

31. max_pooling2d_2 (MaxPooling2D) (None, 64, 64, 64) 0 activation_4[0][0]

32. __________________________________________________________________________________________________

33. conv2d_5 (Conv2D) (None, 64, 64, 128) 73856 max_pooling2d_2[0][0]

34. __________________________________________________________________________________________________

35. batch_normalization_5 (BatchNor (None, 64, 64, 128) 512 conv2d_5[0][0]

36. __________________________________________________________________________________________________

37. activation_5 (Activation) (None, 64, 64, 128) 0 batch_normalization_5[0][0]

38. __________________________________________________________________________________________________

39. conv2d_6 (Conv2D) (None, 64, 64, 128) 147584 activation_5[0][0]

40. __________________________________________________________________________________________________

41. batch_normalization_6 (BatchNor (None, 64, 64, 128) 512 conv2d_6[0][0]

42. __________________________________________________________________________________________________

43. activation_6 (Activation) (None, 64, 64, 128) 0 batch_normalization_6[0][0]

44. __________________________________________________________________________________________________

45. max_pooling2d_3 (MaxPooling2D) (None, 32, 32, 128) 0 activation_6[0][0]

46. __________________________________________________________________________________________________

47. conv2d_7 (Conv2D) (None, 32, 32, 256) 295168 max_pooling2d_3[0][0]

48. __________________________________________________________________________________________________

49. batch_normalization_7 (BatchNor (None, 32, 32, 256) 1024 conv2d_7[0][0]

50. __________________________________________________________________________________________________

51. activation_7 (Activation) (None, 32, 32, 256) 0 batch_normalization_7[0][0]

52. __________________________________________________________________________________________________

53. conv2d_8 (Conv2D) (None, 32, 32, 256) 590080 activation_7[0][0]

54. __________________________________________________________________________________________________

55. batch_normalization_8 (BatchNor (None, 32, 32, 256) 1024 conv2d_8[0][0]

56. __________________________________________________________________________________________________

57. activation_8 (Activation) (None, 32, 32, 256) 0 batch_normalization_8[0][0]

58. __________________________________________________________________________________________________

59. max_pooling2d_4 (MaxPooling2D) (None, 16, 16, 256) 0 activation_8[0][0]

60. __________________________________________________________________________________________________

61. conv2d_9 (Conv2D) (None, 16, 16, 512) 1180160 max_pooling2d_4[0][0]

62. __________________________________________________________________________________________________

63. batch_normalization_9 (BatchNor (None, 16, 16, 512) 2048 conv2d_9[0][0]

64. __________________________________________________________________________________________________

65. activation_9 (Activation) (None, 16, 16, 512) 0 batch_normalization_9[0][0]

66. __________________________________________________________________________________________________

67. conv2d_10 (Conv2D) (None, 16, 16, 512) 2359808 activation_9[0][0]

68. __________________________________________________________________________________________________

69. batch_normalization_10 (BatchNo (None, 16, 16, 512) 2048 conv2d_10[0][0]

70. __________________________________________________________________________________________________

71. activation_10 (Activation) (None, 16, 16, 512) 0 batch_normalization_10[0][0]

72. __________________________________________________________________________________________________

73. max_pooling2d_5 (MaxPooling2D) (None, 8, 8, 512) 0 activation_10[0][0]

74. __________________________________________________________________________________________________

75. conv2d_11 (Conv2D) (None, 8, 8, 1024) 4719616 max_pooling2d_5[0][0]

76. __________________________________________________________________________________________________

77. batch_normalization_11 (BatchNo (None, 8, 8, 1024) 4096 conv2d_11[0][0]

78. __________________________________________________________________________________________________

79. activation_11 (Activation) (None, 8, 8, 1024) 0 batch_normalization_11[0][0]

80. __________________________________________________________________________________________________

81. conv2d_12 (Conv2D) (None, 8, 8, 1024) 9438208 activation_11[0][0]

82. __________________________________________________________________________________________________

83. batch_normalization_12 (BatchNo (None, 8, 8, 1024) 4096 conv2d_12[0][0]

84. __________________________________________________________________________________________________

85. activation_12 (Activation) (None, 8, 8, 1024) 0 batch_normalization_12[0][0]

86. __________________________________________________________________________________________________

87. up_sampling2d_1 (UpSampling2D) (None, 16, 16, 1024) 0 activation_12[0][0]

88. __________________________________________________________________________________________________

89. concatenate_1 (Concatenate) (None, 16, 16, 1536) 0 activation_10[0][0]

90. up_sampling2d_1[0][0]

91. __________________________________________________________________________________________________

92. conv2d_13 (Conv2D) (None, 16, 16, 512) 7078400 concatenate_1[0][0]

93. __________________________________________________________________________________________________

94. batch_normalization_13 (BatchNo (None, 16, 16, 512) 2048 conv2d_13[0][0]

95. __________________________________________________________________________________________________

96. activation_13 (Activation) (None, 16, 16, 512) 0 batch_normalization_13[0][0]

97. __________________________________________________________________________________________________

98. conv2d_14 (Conv2D) (None, 16, 16, 512) 2359808 activation_13[0][0]

99. __________________________________________________________________________________________________

100. batch_normalization_14 (BatchNo (None, 16, 16, 512) 2048 conv2d_14[0][0]

101. __________________________________________________________________________________________________

102. activation_14 (Activation) (None, 16, 16, 512) 0 batch_normalization_14[0][0]

103. __________________________________________________________________________________________________

104. conv2d_15 (Conv2D) (None, 16, 16, 512) 2359808 activation_14[0][0]

105. __________________________________________________________________________________________________

106. batch_normalization_15 (BatchNo (None, 16, 16, 512) 2048 conv2d_15[0][0]

107. __________________________________________________________________________________________________

108. activation_15 (Activation) (None, 16, 16, 512) 0 batch_normalization_15[0][0]

109. __________________________________________________________________________________________________

110. up_sampling2d_2 (UpSampling2D) (None, 32, 32, 512) 0 activation_15[0][0]

111. __________________________________________________________________________________________________

112. concatenate_2 (Concatenate) (None, 32, 32, 768) 0 activation_8[0][0]

113. up_sampling2d_2[0][0]

114. __________________________________________________________________________________________________

115. conv2d_16 (Conv2D) (None, 32, 32, 256) 1769728 concatenate_2[0][0]

116. __________________________________________________________________________________________________

117. batch_normalization_16 (BatchNo (None, 32, 32, 256) 1024 conv2d_16[0][0]

118. __________________________________________________________________________________________________

119. activation_16 (Activation) (None, 32, 32, 256) 0 batch_normalization_16[0][0]

120. __________________________________________________________________________________________________

121. conv2d_17 (Conv2D) (None, 32, 32, 256) 590080 activation_16[0][0]

122. __________________________________________________________________________________________________

123. batch_normalization_17 (BatchNo (None, 32, 32, 256) 1024 conv2d_17[0][0]

124. __________________________________________________________________________________________________

125. activation_17 (Activation) (None, 32, 32, 256) 0 batch_normalization_17[0][0]

126. __________________________________________________________________________________________________

127. conv2d_18 (Conv2D) (None, 32, 32, 256) 590080 activation_17[0][0]

128. __________________________________________________________________________________________________

129. batch_normalization_18 (BatchNo (None, 32, 32, 256) 1024 conv2d_18[0][0]

130. __________________________________________________________________________________________________

131. activation_18 (Activation) (None, 32, 32, 256) 0 batch_normalization_18[0][0]

132. __________________________________________________________________________________________________

133. up_sampling2d_3 (UpSampling2D) (None, 64, 64, 256) 0 activation_18[0][0]

134. __________________________________________________________________________________________________

135. concatenate_3 (Concatenate) (None, 64, 64, 384) 0 activation_6[0][0]

136. up_sampling2d_3[0][0]

137. __________________________________________________________________________________________________

138. conv2d_19 (Conv2D) (None, 64, 64, 128) 442496 concatenate_3[0][0]

139. __________________________________________________________________________________________________

140. batch_normalization_19 (BatchNo (None, 64, 64, 128) 512 conv2d_19[0][0]

141. __________________________________________________________________________________________________

142. activation_19 (Activation) (None, 64, 64, 128) 0 batch_normalization_19[0][0]

143. __________________________________________________________________________________________________

144. conv2d_20 (Conv2D) (None, 64, 64, 128) 147584 activation_19[0][0]

145. __________________________________________________________________________________________________

146. batch_normalization_20 (BatchNo (None, 64, 64, 128) 512 conv2d_20[0][0]

147. __________________________________________________________________________________________________

148. activation_20 (Activation) (None, 64, 64, 128) 0 batch_normalization_20[0][0]

149. __________________________________________________________________________________________________

150. conv2d_21 (Conv2D) (None, 64, 64, 128) 147584 activation_20[0][0]

151. __________________________________________________________________________________________________

152. batch_normalization_21 (BatchNo (None, 64, 64, 128) 512 conv2d_21[0][0]

153. __________________________________________________________________________________________________

154. activation_21 (Activation) (None, 64, 64, 128) 0 batch_normalization_21[0][0]

155. __________________________________________________________________________________________________

156. up_sampling2d_4 (UpSampling2D) (None, 128, 128, 128) 0 activation_21[0][0]

157. __________________________________________________________________________________________________

158. concatenate_4 (Concatenate) (None, 128, 128, 192) 0 activation_4[0][0]

159. up_sampling2d_4[0][0]

160. __________________________________________________________________________________________________

161. conv2d_22 (Conv2D) (None, 128, 128, 64) 110656 concatenate_4[0][0]

162. __________________________________________________________________________________________________

163. batch_normalization_22 (BatchNo (None, 128, 128, 64) 256 conv2d_22[0][0]

164. __________________________________________________________________________________________________

165. activation_22 (Activation) (None, 128, 128, 64) 0 batch_normalization_22[0][0]

166. __________________________________________________________________________________________________

167. conv2d_23 (Conv2D) (None, 128, 128, 64) 36928 activation_22[0][0]

168. __________________________________________________________________________________________________

169. batch_normalization_23 (BatchNo (None, 128, 128, 64) 256 conv2d_23[0][0]

170. __________________________________________________________________________________________________

171. activation_23 (Activation) (None, 128, 128, 64) 0 batch_normalization_23[0][0]

172. __________________________________________________________________________________________________

173. conv2d_24 (Conv2D) (None, 128, 128, 64) 36928 activation_23[0][0]

174. __________________________________________________________________________________________________

175. batch_normalization_24 (BatchNo (None, 128, 128, 64) 256 conv2d_24[0][0]

176. __________________________________________________________________________________________________

177. activation_24 (Activation) (None, 128, 128, 64) 0 batch_normalization_24[0][0]

178. __________________________________________________________________________________________________

179. up_sampling2d_5 (UpSampling2D) (None, 256, 256, 64) 0 activation_24[0][0]

__________________________________________________________________________________________________
concatenate_5 (Concatenate)     (None, 256, 256, 96)  0      activation_2[0][0]
                                                             up_sampling2d_5[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, 256, 256, 32)  27680  concatenate_5[0][0]
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 256, 256, 32)  128    conv2d_25[0][0]
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 256, 256, 32)  0      batch_normalization_25[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 256, 256, 32)  9248   activation_25[0][0]
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 256, 256, 32)  128    conv2d_26[0][0]
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 256, 256, 32)  0      batch_normalization_26[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 256, 256, 32)  9248   activation_26[0][0]
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 256, 256, 32)  128    conv2d_27[0][0]
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 256, 256, 32)  0      batch_normalization_27[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, 256, 256, 1)   33     activation_27[0][0]
==================================================================================================

The UNet network code is as follows:

from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, UpSampling2D, concatenate)

def get_unet_256(input_shape=(256, 256, 3), num_classes=1):
    inputs = Input(shape=input_shape)
    # 256

    down0 = Conv2D(32, (3, 3), padding='same')(inputs)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0 = Conv2D(32, (3, 3), padding='same')(down0)
    down0 = BatchNormalization()(down0)
    down0 = Activation('relu')(down0)
    down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
    # 128

    down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1 = Conv2D(64, (3, 3), padding='same')(down1)
    down1 = BatchNormalization()(down1)
    down1 = Activation('relu')(down1)
    down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
    # 64

    down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2 = Conv2D(128, (3, 3), padding='same')(down2)
    down2 = BatchNormalization()(down2)
    down2 = Activation('relu')(down2)
    down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32

    down3 = Conv2D(256, (3, 3), padding='same')(down2_pool)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3 = Conv2D(256, (3, 3), padding='same')(down3)
    down3 = BatchNormalization()(down3)
    down3 = Activation('relu')(down3)
    down3_pool = MaxPooling2D((2, 2), strides=(2, 2))(down3)
    # 16

    down4 = Conv2D(512, (3, 3), padding='same')(down3_pool)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4 = Conv2D(512, (3, 3), padding='same')(down4)
    down4 = BatchNormalization()(down4)
    down4 = Activation('relu')(down4)
    down4_pool = MaxPooling2D((2, 2), strides=(2, 2))(down4)
    # 8

    center = Conv2D(1024, (3, 3), padding='same')(down4_pool)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    center = Conv2D(1024, (3, 3), padding='same')(center)
    center = BatchNormalization()(center)
    center = Activation('relu')(center)
    # center

    up4 = UpSampling2D((2, 2))(center)
    up4 = concatenate([down4, up4], axis=3)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    up4 = Conv2D(512, (3, 3), padding='same')(up4)
    up4 = BatchNormalization()(up4)
    up4 = Activation('relu')(up4)
    # 16

    up3 = UpSampling2D((2, 2))(up4)
    up3 = concatenate([down3, up3], axis=3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    up3 = Conv2D(256, (3, 3), padding='same')(up3)
    up3 = BatchNormalization()(up3)
    up3 = Activation('relu')(up3)
    # 32

    up2 = UpSampling2D((2, 2))(up3)
    up2 = concatenate([down2, up2], axis=3)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    up2 = Conv2D(128, (3, 3), padding='same')(up2)
    up2 = BatchNormalization()(up2)
    up2 = Activation('relu')(up2)
    # 64

    up1 = UpSampling2D((2, 2))(up2)
    up1 = concatenate([down1, up1], axis=3)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    up1 = Conv2D(64, (3, 3), padding='same')(up1)
    up1 = BatchNormalization()(up1)
    up1 = Activation('relu')(up1)
    # 128

    up0 = UpSampling2D((2, 2))(up1)
    up0 = concatenate([down0, up0], axis=3)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    up0 = Conv2D(32, (3, 3), padding='same')(up0)
    up0 = BatchNormalization()(up0)
    up0 = Activation('relu')(up0)
    # 256

    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(up0)

    model = Model(inputs=inputs, outputs=classify)

    # model.compile(optimizer=RMSprop(lr=0.0001), loss=bce_dice_loss, metrics=[dice_coeff])

    return model
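The commented-out compile line refers to bce_dice_loss and dice_coeff, which are not defined in the listing. A common formulation (a sketch of one conventional definition, not taken from the original code; shown in NumPy for clarity) computes the Dice coefficient over the flattened prediction and ground truth, and adds binary cross-entropy to (1 - Dice):

```python
import numpy as np

def dice_coeff(y_true, y_pred, smooth=1.0):
    # Dice = 2*|A intersect B| / (|A| + |B|); smooth avoids division by zero
    y_true_f = y_true.ravel()
    y_pred_f = y_pred.ravel()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

def bce_dice_loss(y_true, y_pred, eps=1e-7):
    # Binary cross-entropy plus the Dice loss (1 - Dice coefficient)
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return bce + (1.0 - dice_coeff(y_true, y_pred))
```

To use these directly in model.compile, the same arithmetic would be expressed with Keras backend ops (K.flatten, K.sum, K.binary_crossentropy) so the loss stays differentiable.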

The input is a 256×256×3 color image and the output is a 256×256×1 mask. The training parameters are as follows:
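The spatial-size comments in the listing (# 128 down to # 8, then back up to # 256) follow from repeated 2×2 pooling and upsampling; a quick check of the halving/doubling arithmetic:

```python
# Encoder: five 2x2 max-poolings halve the 256-pixel input each time.
sizes = [256]
for _ in range(5):
    sizes.append(sizes[-1] // 2)

# Decoder: five 2x2 upsamplings double the 8-pixel bottleneck back up.
decoded = sizes[-1] * 2 ** 5
```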

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

model.fit(image_train, label_train, epochs=100, verbose=1,
          validation_split=0.2, shuffle=True, batch_size=8)
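For fit to work as written, image_train must hold normalized float images of shape (N, 256, 256, 3) and label_train binary masks of shape (N, 256, 256, 1). A minimal preparation sketch (the helper name and the 0/255 mask convention are assumptions, not from the original):

```python
import numpy as np

def prepare_batch(images_uint8, masks_uint8):
    """Scale uint8 images to [0, 1] floats and binarize masks.

    images_uint8: (N, 256, 256, 3) uint8 array
    masks_uint8:  (N, 256, 256) uint8 array, 0 = background, 255 = skin
    """
    image_train = images_uint8.astype(np.float32) / 255.0
    # Threshold to {0, 1} and add the trailing channel axis the model expects.
    label_train = (masks_uint8 > 127).astype(np.float32)[..., np.newaxis]
    return image_train, label_train
```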

The resulting segmentations are shown below:

A note on sample annotation: I labeled the entire face region as skin, so the facial features are not excluded. If you want a skin region that excludes the facial features, simply replace the training samples accordingly.

With an accurate skin region in hand, we can update the skin-smoothing algorithm. A set of before-and-after results is shown below:

As you can see, a traditional color-space-based smoothing algorithm cannot accurately separate skin from skin-toned regions, so it smooths the hair as well and loses hair texture detail. The UNet-based skin segmentation distinguishes skin from skin-toned areas such as hair, so the hair texture is preserved and smoothing is applied only where it belongs. The effect is clearly better than the traditional method.
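The idea of smoothing only where the mask says "skin" can be sketched as a mask-weighted blend between a smoothed image and the original: hair pixels, where the mask is 0, pass through untouched. This is a simplified illustration using a plain box blur, not the article's actual smoothing filter:

```python
import numpy as np

def box_blur(img, k=5):
    # Average k*k shifted copies of the edge-padded image (a naive box blur).
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode='edge')
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def smooth_skin_only(image, skin_mask, k=5):
    """Blend: smoothed where mask == 1, original where mask == 0."""
    blurred = box_blur(image.astype(np.float32), k)
    mask = skin_mask[..., np.newaxis].astype(np.float32)
    return mask * blurred + (1.0 - mask) * image.astype(np.float32)
```

With a soft (0..1) mask straight from the sigmoid output, the same blend produces a gradual transition at the skin boundary instead of a hard seam.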

At present, mainstream apps such as Meitu Xiuxiu and Tiantian P also use deep-learning-based skin segmentation to improve their smoothing effect. This brief introduction should help you understand how.

Of course, using deep learning to improve the traditional method is only a stepping stone, which is why this article is titled "AI beauty skin-smoothing algorithm I". In "AI beauty skin-smoothing algorithm II", I will abandon the traditional method completely and rely on deep learning end to end.

Beauty-industry AI rides the trend: the "robot" finds a new breakthrough for the beauty industry.

Recently, the professional assistant app launched a new "robot" feature, which uses artificial intelligence to analyze a customer's face shape and facial features and suggest hairstyles in line with popular aesthetics. It is reportedly the first product in the hairdressing industry to bring artificial intelligence into the salon. This creative combination of science and aesthetics sets a precedent for pairing AI with hairdressing.

We often hear hair stylists complain: housing prices have multiplied several times, so why does every haircut price increase draw so much resentment? The root cause is not that customers think the price itself is high, but that when the stylist raises the price, the service provided does not change noticeably.

As incomes rise, customers no longer want the same hairstyle as everyone else. They want something personalized and distinct, ideally a style that exists only for them, customized for them. "Private customization" and "one person, one design" are the inevitable trend of consumption upgrading.

However, a large number of stylists still lack the ability to design individually. Only a few can truly deliver "one person, one design"; many still apply the handful of template hairstyles they learned in training to every customer. This clearly does not meet customers' needs.