AI ETHICS IN RETAIL

AI marketing has disrupted the retail industry by providing countless advantages to retailers of all sizes, from Fortune 500 corporations to small e-commerce shops. These advantages include saving time and money while engaging a larger set of consumers to increase revenue.

This is because AI has the capacity to analyze billions of data points in the blink of an eye and translate them into actionable insights. For a person, this would take far longer. With AI, marketers can better understand which factors drive the best results.

AI creates endless possibilities for targeting, engaging, and converting potential customers. However, with this power comes the responsibility to act ethically. In the absence of universal industry standards, it is up to leaders to make the hard decisions and implement the right pathway for AI integration. To ensure you are deploying AI ethically in your marketing efforts, try to do the following:

Establish best practices. Since AI adoption is still in its infancy, many firms lack a strategic focus on integration, which can cause ethical problems. To handle this, establish best practices ahead of time to ensure that AI operates objectively. This includes understanding how AI learns, how it assigns tags to images and words, and how data feeds into the recommendations served to users.

Create diverse teams. Firms that integrate AI should ensure it reflects the diversity of their users. Committing to diversity and representation allows the humans behind AI to bring varied perspectives and raise necessary questions. As a result, the AI solutions are as ethically sound and unbiased as possible, enabling them to find the most effective solutions for their users.

Reinforce learning. To build the most effective AI solution possible, reinforcement learning is critical. With reinforcement learning, developers can reward AI when it self-corrects mistakes and when its outcomes align with more ethical approaches to processing. This enables marketers and developers to train AI to be more ethical, which in turn makes it less biased.
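To make the reward-shaping idea concrete, here is a minimal sketch, not any vendor's actual system: a bandit-style learner whose reward combines task success with a penalty for outcomes flagged as unethical, so the agent learns to avoid them. The action names and reward values are invented for this illustration.

```python
import random

# Illustrative sketch: a simple bandit-style learner whose reward combines
# task success (engagement) with an ethics penalty, steering the agent away
# from actions flagged as problematic. Actions and numbers are invented.

ACTIONS = ["broad_targeting", "narrow_targeting", "sensitive_targeting"]

def environment_reward(action):
    # Hypothetical rewards: sensitive targeting engages well,
    # but the ethics check penalizes it heavily.
    base = {"broad_targeting": 0.5,
            "narrow_targeting": 0.7,
            "sensitive_targeting": 0.9}[action]
    ethics_penalty = 1.0 if action == "sensitive_targeting" else 0.0
    return base - ethics_penalty

def train(episodes=2000, epsilon=0.1, alpha=0.1, seed=0):
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}  # value estimate per action
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        # Incremental update toward the observed (penalized) reward.
        q[action] += alpha * (environment_reward(action) - q[action])
    return q

q = train()
print(max(q, key=q.get))  # the agent settles on the non-penalized action
```

The design point is that the ethics penalty lives in the reward signal itself, so "more ethical" behavior is what the agent is literally optimizing for rather than an afterthought.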

Be transparent. It is important to be open and honest with customers about your deployment of AI. This means answering any questions a customer has about the technology. Trying to hide the fact that you are using AI can destroy customer trust, whereas being transparent builds patience for the technology.

The biggest concern in AI ethics: as brands and marketers, we must understand the human side of what we are doing and build better pathways to ethical AI. While AI can have a profound impact, we need to ensure its ethical side is addressed so that this technology changes industries for the better.

Benefits of AI Applications in Education

Personalized learning systems, automated assessments, facial recognition systems, chatbots on social media sites, and predictive analytics tools are increasingly being deployed in K-12 educational settings; they are powered by machine-learning systems and algorithms. These applications of AI have shown promise in supporting teachers and students in various ways: (a) providing instruction in mixed-ability classrooms, (b) giving students detailed and timely feedback on their writing, and (c) freeing teachers from the burden of possessing all knowledge, giving them more room to support their students as those students observe, discuss, and gather information in collaborative knowledge-building processes. Below, we outline the benefits of each of these educational applications in the K-12 setting before turning to a synthesis of their ethical challenges and drawbacks.

Personalized learning systems: Personalized learning systems, also known as adaptive learning platforms or intelligent tutoring systems, are one of the most common and valuable applications of AI to support students and teachers. They provide students access to different learning materials based on their individual learning needs and subjects. For example, rather than practicing chemistry on a worksheet or reading a textbook, students may use an adaptive, interactive multimedia version of the course content. Comparing students’ scores on researcher-developed or standardized tests, research shows that instruction based on personalized learning systems results in higher test scores than traditional teacher-led instruction. Microsoft’s 2018 report on over 2,000 students and teachers from Singapore, the U.S., the UK, and Canada shows that AI supports students’ learning progressions. These platforms promise to identify gaps in students’ prior knowledge by adapting learning tools and materials to support students’ growth. These systems build models of learners’ knowledge and cognition; however, existing platforms do not yet model learners’ social, emotional, and motivational states. Given the shift to remote K-12 education during the COVID-19 pandemic, personalized learning systems offer a promising form of distance learning that could reshape K-12 instruction for the future.
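The adaptive logic described above can be sketched in a few lines. This is a toy illustration, not any real platform's algorithm: the topics, update rule, and learning rate are invented. The system keeps a per-topic mastery estimate, nudges it after each answer, and serves the topic where the student is weakest.

```python
# Toy sketch of adaptive item selection: track a mastery estimate per topic,
# update it after each answer, and serve the weakest topic next.
# Topics, update rule, and learning rate are invented for this example.

def update_mastery(mastery, topic, correct, lr=0.3):
    """Nudge the estimate toward 1 on a correct answer, toward 0 otherwise."""
    target = 1.0 if correct else 0.0
    mastery[topic] += lr * (target - mastery[topic])
    return mastery

def next_topic(mastery):
    """Serve the topic with the lowest current mastery estimate."""
    return min(mastery, key=mastery.get)

# A chemistry student starts with neutral estimates on every topic.
mastery = {"stoichiometry": 0.5, "bonding": 0.5, "acids_bases": 0.5}
mastery = update_mastery(mastery, "bonding", correct=False)
mastery = update_mastery(mastery, "stoichiometry", correct=True)
print(next_topic(mastery))  # → bonding
```

Real intelligent tutoring systems use far richer learner models (e.g., Bayesian knowledge tracing), but the core loop of estimate, update, and select is the same.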

Automated assessment systems: Automated assessment systems are becoming one of the most prominent and promising applications of machine learning in K-12 education. These scoring-algorithm systems are being developed to meet the need for scoring students’ writing, exams, and assignments, tasks usually performed by the teacher. Assessment algorithms can provide course support and management tools that lessen teachers’ workload, as well as extend their capacity and productivity. Ideally, these systems can provide additional support to students because their essays can be graded quickly. Providers of the biggest open online courses, such as Coursera and EdX, have integrated automated scoring engines into their learning platforms to assess the writing of hundreds of students. Similarly, a tool called “Gradescope” has been used by over 500 universities to develop and streamline scoring and assessment. By flagging wrong answers and marking correct ones, the tool supports instructors by reducing the time and effort of manual grading. Automated essay assessment thus differs markedly from numeric assessment, which simply checks right or wrong answers on a test. Overall, these scoring systems have the potential to handle the complexities of the teaching context and support students’ learning process by providing feedback and guidance to improve and revise their writing.
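The flag-and-mark workflow described above can be illustrated with a short sketch. This is not Gradescope's actual implementation; the question IDs and answer key are invented. The point is the division of labor: the system marks unambiguous answers and surfaces the rest for instructor review.

```python
# Illustrative sketch of automated marking: compare submitted answers
# against a key, mark correct ones, and flag wrong ones for the instructor.
# Question IDs and answers are invented for this example.

def grade(submission, answer_key):
    marks, flagged = {}, []
    for qid, expected in answer_key.items():
        given = submission.get(qid, "").strip().lower()
        correct = given == expected.lower()
        marks[qid] = correct
        if not correct:
            flagged.append(qid)  # surfaced for instructor review
    score = sum(marks.values()) / len(answer_key)
    return score, flagged

key = {"q1": "mitochondria", "q2": "osmosis", "q3": "enzyme"}
sub = {"q1": "Mitochondria", "q2": "diffusion", "q3": "enzyme"}
score, flagged = grade(sub, key)
print(score, flagged)  # 2 of 3 correct; q2 flagged for review
```

Essay scoring, by contrast, requires trained models rather than exact matching, which is why it is the harder and more contested case.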

Facial recognition systems and predictive analytics: Facial recognition software is used to capture and monitor students’ facial expressions. These systems provide insights into students’ behaviors during learning processes and allow teachers to take action or intervene, which, in turn, helps teachers develop learner-centered practices and increase students’ engagement. Predictive analytics systems are mainly used to identify and detect patterns about learners through statistical analysis. For example, these analytics can be used to detect university students who are at risk of failing or not completing a course. Through these identifications, instructors can intervene and get students the help they need.
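As a sketch of the kind of predictive-analytics rule described above, the snippet below scores a student's risk of failing from a few features via a logistic function. The features, weights, and threshold are invented; a real system would fit them to historical data rather than hand-pick them.

```python
import math

# Illustrative sketch of an at-risk prediction rule. Features, weights,
# and threshold are invented; a real system would learn them from data.

WEIGHTS = {"attendance_rate": -2.0,   # higher attendance lowers risk
           "missed_assignments": 0.8, # each missed assignment raises risk
           "midterm_score": -3.0}     # higher midterm (0-1) lowers risk
BIAS = 1.5

def risk_probability(student):
    z = BIAS + sum(WEIGHTS[f] * student[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link: maps score into (0, 1)

def at_risk(student, threshold=0.5):
    return risk_probability(student) >= threshold

struggling = {"attendance_rate": 0.4, "missed_assignments": 5, "midterm_score": 0.35}
thriving = {"attendance_rate": 0.95, "missed_assignments": 0, "midterm_score": 0.9}
print(at_risk(struggling), at_risk(thriving))
```

Note that the ethical concerns discussed later apply directly here: the flag is only as fair as the data and weights behind it.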

Social networking sites and chatbots: Social networking sites (SNSs) connect students and teachers through social media outlets. Researchers have emphasized the importance of using SNSs (such as Facebook) to expand learning opportunities beyond the classroom, monitor students’ well-being, and deepen student–teacher relations. Different scholars have examined the role of social media in education, describing its impact on student and teacher learning and scholarly communication. They point out that integrating social media can foster students’ active learning, collaboration skills, and connections with communities beyond the classroom. Chatbots, also known as dialogue systems or conversational agents, also appear on social media outlets through different AI systems. Chatbots are helpful because they can respond naturally, in a conversational tone. For instance, a text-based chatbot system called “Pounce” was used at Georgia State University to help students through the registration and admission process, as well as with financial aid and other administrative tasks.
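In the spirit of the "Pounce" example, here is a minimal rule-based sketch of an admissions chatbot; this is not the actual Pounce system, and the intents and canned replies are invented. Production chatbots typically use trained intent classifiers, but keyword matching shows the basic shape.

```python
# Illustrative sketch of a rule-based admissions chatbot: match a student's
# question against keyword rules and return canned guidance, falling back
# to a human when nothing matches. Intents and replies are invented.

INTENTS = {
    ("register", "enroll", "registration"):
        "Registration opens on the student portal under 'Courses'.",
    ("financial", "aid", "fafsa"):
        "For financial aid questions, see the Aid Office page or submit your FAFSA.",
    ("deadline", "due"):
        "Key deadlines are listed on the academic calendar.",
}
FALLBACK = "I'm not sure about that - let me connect you with an advisor."

def reply(message):
    words = set(message.lower().replace("?", "").split())
    for keywords, answer in INTENTS.items():
        if words & set(keywords):  # any keyword present triggers the intent
            return answer
    return FALLBACK

print(reply("How do I register for classes?"))
print(reply("Tell me a joke"))  # no matching intent: hand off to a human
```

The fallback branch matters: a chatbot that routes unrecognized questions to staff is how these systems "extend capacity" without replacing human support.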

In conclusion, applications of AI can positively impact students’ and teachers’ educational experiences and help them address instructional challenges and concerns. On the other hand, AI cannot substitute for human interaction. Students have a wide range of learning styles and needs. Although AI can be a time-saving and cognitive aid for teachers, it is but one tool in the teacher’s toolkit. Therefore, it is critical for teachers and students to understand the limits, potential risks, and ethical drawbacks of AI applications in education if they are to reap the benefits of AI and minimize the costs.

Ethical Concerns and Potential Risks

One of the biggest ethical issues surrounding the use of AI in K-12 education relates to the privacy concerns of students and teachers. Privacy violations mainly occur as people expose excessive amounts of personal information on online platforms. Although legislation and standards exist to protect sensitive personal data, AI-based tech companies’ violations of data access and security increase people’s privacy concerns. To address these concerns, AI systems ask for users’ consent to access their personal data. Although consent requests are designed as protective measures to help alleviate privacy concerns, many individuals give their consent without knowing or considering the extent of the information (metadata) they are sharing, such as the language spoken, racial identity, biographical data, and location. Such uninformed sharing in effect undermines human agency and privacy. In other words, people’s agency diminishes as AI systems reduce introspective and independent thought. Relatedly, scholars have raised the ethical issue of forcing students and parents to use these algorithms as part of their education even when they do not wish to give up their privacy; they have little real choice if these systems are required by public schools.

Another ethical concern surrounding the use of AI in K-12 education is surveillance or tracking systems that gather detailed information about the actions and preferences of students and teachers. Through algorithms and machine-learning models, AI tracking systems not only monitor users’ activities but also predict their future preferences and actions. Surveillance mechanisms can be embedded into AI’s predictive systems to forecast students’ learning performance, strengths, weaknesses, and learning patterns. For instance, research suggests that teachers who use social networking sites (SNSs) for pedagogical purposes encounter a number of problems, such as concerns about the boundaries of privacy, friendship authority, and responsibility and availability. While monitoring and patrolling students’ actions might be considered part of a teacher’s responsibility and a pedagogical tool for intervening in dangerous online situations (such as cyberbullying or exposure to sexual content), such actions can also be seen as surveillance, which is problematic because it threatens students’ privacy. Monitoring and tracking students’ online conversations and actions may also limit their participation in the learning event and make them feel unsafe taking ownership of their ideas. How can students feel secure and safe if they know that AI systems are used to surveil and police their thoughts and actions?

Problems also emerge when surveillance systems raise issues of autonomy, specifically, a person’s ability to act on their own interests and values. Predictive systems powered by algorithms jeopardize students’ and teachers’ autonomy and their ability to govern their own lives. Using algorithms to make predictions about individuals based on their information raises questions about fairness and self-determination. The risks of predictive analysis therefore also include perpetuating the existing biases and prejudices of social discrimination and stratification.

Finally, bias and discrimination are critical concerns in debates on AI ethics in K-12 education. In AI platforms, existing power structures and biases are embedded into machine-learning models. Gender bias is one of the most apparent forms of this problem; the bias is revealed when students in language-learning courses use AI to translate between a gender-specific language and one that is less so. For example, Google Translate rendered the Turkish equivalent of “She/he is a nurse” in the feminine form but the Turkish equivalent of “She/he is a doctor” in the masculine form. This shows how AI models for language translation carry the societal biases and gender-specific stereotypes present in their data. Similarly, a number of problematic cases of racial bias are associated with AI’s facial recognition systems. Research shows that facial recognition software has misidentified a number of African American and Latino American people as convicted felons.
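One practical response to the translation example above is a bias audit: feed a model gender-neutral sentences and record which gendered pronoun it picks. The sketch below uses a toy `translate` function that merely mimics the reported behavior (Turkish "o" is gender-neutral); a real audit would call an actual translation API instead.

```python
# Illustrative sketch of a translation bias audit. The toy translate()
# below only mimics the reported Google Translate behavior; a real audit
# would query a real translation system with many neutral sentences.

def translate(turkish_sentence):
    # "o" is a gender-neutral pronoun in Turkish, yet the English output
    # pronoun skews by occupation (the bias described in the text).
    biased = {"o bir hemşire": "she is a nurse",
              "o bir doktor": "he is a doctor"}
    return biased[turkish_sentence]

def audit(sentences):
    """Record which English pronoun the model picks for each neutral input."""
    return {s: translate(s).split()[0] for s in sentences}

result = audit(["o bir hemşire", "o bir doktor"])
print(result)  # a skew (nurse -> she, doctor -> he) signals stereotyped output
```

At scale, counting such skews across many occupations gives a measurable picture of the stereotypes a model has absorbed from its training data.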

