
    UNIVERSITY OF TECHNOLOGY SYDNEY
    31272 Project Management and the Professional
    Assignment 1 – Spring 2017
    Marks:  20 marks (20%)
    Submission Components: Softcopy Report
    Submission Due:  6pm Wednesday, 6 September 2017
    Report Submission To:  UTSonline (softcopy)
Length:  2,200 - 2,500 words (of report body)
    Anticipated Commitment:  12 hours per student 
    Objectives/Graduate Attributes:  4 / A1, B6, E1, F2
    This is an individual assignment
Technology development is occurring swiftly. Its sophistication has reached levels where the
introduction of the latest advances has prompted concerns over their use (or abuse). One such
area is the advent of driverless vehicles and the potential concerns of undertaking road travel with
little or no human oversight. While the efficiency of robot cars might be apparent, the ethics
behind such a use of technology could also be open to debate.
    Background
    In this assignment you will discuss the ethics of a real-life situation as reported in the public
domain (see the attached case study): in this case, the potential introduction of automated
vehicles and the attendant removal of human decision-making from overland travel. There are
    many potential issues involved in this situation. As a project management student who may in
    future help create algorithms and software guiding such technologies, concerns over use, risk
    and outcomes could be quite relevant to your long-term roles/activities within the IS/IT sector.
    While the original text for the case study news reports can be found in the reference provided
    you are also expected to further research and investigate this topic for yourself as needed.
    Tasks and Assessment
This assessment requires that you prepare a report conceptualising the problem, finding relevant
references for context, and developing appropriate points of view. Your positions should be
    supported with reasoned argument (and citations where needed). Marks will also be awarded for
    professionalism of response.
    1) Ethical Analysis (12 marks)
    a) Identify and outline at least 3 ethical issues that may be associated with the situation as
    described in the article. Why do you believe that these are (or could be) ethical concerns?
    b) For the 3 ethical issues nominated, identify the key stakeholders. For each stakeholder
    attempt to describe the situation and reasoning from their perspective.
    c) For each stakeholder group identified, select what you believe is the relevant ethical view
    they have likely taken (i.e. choose from the list of 7 ‘ethical principles’ given on slide 10
    of lecture 2) – and then briefly explain why you attributed that ethical stance to them.
    d) For the 3 issues you have outlined assess your personal values and then, for each issue,
    determine your own closest fit to the list of 7 ‘ethical principles’. Comparing your view to
    the associated stakeholder positions, discuss which stakeholder (in your ethical
    judgement) is most likely to be ‘right’. Why do you think this?
    2) Researching International Codes of Ethics (6 marks)
    Research the ethics and codes of conduct for Information Technology industry bodies
    representing Australia (i.e. Australian Computer Society (ACS)) plus two other national or
international groups (e.g. Association for Computing Machinery (ACM), Institute of
    Electrical and Electronics Engineers (IEEE), British Computer Society (BCS), Computer
    Society of India (CSI), Institute of IT Professionals New Zealand (IITP), etc.).
    Answer the following:
    a) In your opinion, and supplying reasoning, what evaluation would each Code most likely
    render regarding the use of automated vehicles and the control systems governing them?
    b) Compare the three ethical codes of conduct. What are the major differences and
    similarities between the three codes you have examined in regard to the case study
    subject matter? Why do you believe that these differences, similarities or areas of
    conflict are present?
    Where needed, justify your answer with specific references to items within both the case
    study and the Codes themselves.
    3) Report Professionalism and Communication (2 marks)
    The report should be written as if meant for a professional audience and not just as an
    attempt to satisfy an academic course requirement. It should communicate clearly, exhibit
    attention to detail and present itself to a high standard, including:
• Good document structure, including:
  - Title page (student name/number, tutor name, tutorial number, title, submission date);
  - Report introduction;
  - Numbered headings for each section;
  - Report conclusion;
  - Reference list page;
  - Additional appendices (if needed).
  The report should have numbered pages and good English expression (including punctuation
  and spelling). A FEIT cover sheet should be included at the front of the submission.
• Clarity and insight: a suitable word count (not counting title page, reference list, etc.), dealing
  properly with each topic without verbosity and showing depth of analysis;
• Appropriate use of quotes, statistics and diagrams (where applicable), backed by properly
  cited sources. All references should be noted in the reference list at the end of your report
  and employ correct Harvard/UTS format.
    Note for Repeating Students
    If you previously attempted 31272 in Spring 2016 or Autumn 2017 then you may re-use your
    mark from that time in place of undertaking this assignment. If so, you MUST email the Subject
Coordinator with your request by 5pm, 25 August 2017, and keep the confirmation you receive in return.
Failure to obtain written approval by this time/date means this assignment must be completed.
    Report Submission
    Submission Requirements
Assignments must be submitted in softcopy to Turnitin via the 'Assignments' tab of
UTSOnline for grading and plagiarism checking. Assignments are expected to be assessed
and graded by Week 10.
    Late Penalty
Late submission will attract a two-mark penalty per calendar day. Submissions more than
five days late will not be accepted and will receive zero unless special consideration has been
sought from, and granted by, the Subject Co-ordinator prior to the due date.
    Referencing Standards
    All material derived from other works must be acknowledged and referenced appropriately
    using the Harvard/UTS Referencing Style. For more information see:
    http://www.lib.uts.edu.au/help/referencing/harvard-uts-referencing-guide
    Originality of Submitted Work
    Students are reminded of the principles laid down in the "Statement of Good Practice and
    Ethics in Informal Assessment" (in the Faculty Handbook). Unless otherwise stated in a
    specific handout, all assessment tasks in this subject should be your own original work. Any
    collaboration with another student (or group) should be limited to those matters described in
    "Acceptable Behaviour" section of the Handbook. For essay questions, students should pay
    particular attention to the recognition of "Plagiarism" as described in that section of the
    Handbook. Any infringement by a student will be considered a breach of discipline and will
    be dealt with in accordance with Rules and By-Laws of the University. Penalties such as
    zero marks for assignments or subjects may be imposed.
    Improve Your Academic and English Language Skills
    HELPS (Higher Education Language and Presentation Support) Service provides assistance
    with English proficiency and academic language. Students needing to develop their written
    and/or spoken English can make use of the free services offered by HELPS, including
    academic language workshops, vacation courses, drop-in consultations, individual
    appointments and Conversations@UTS (www.ssu.uts.edu.au/helps). HELPS is located in
    Student Services on level 3 of building 1, City campus (phone 9514-2327).
The Faculty of Engineering and IT intranet (MyFEIT):
http://my.feit.uts.edu.au/myfeit
and the Faculty Student Guide:
    http://my.feit.uts.edu.au/modules/myfeit/downloads/StudentGuide_Online.pdf
    provide information about services and support available to students within the Faculty.
    Useful Hints for This Assignment
1. The ACS Code of Ethics and Code of Professional Conduct can be found at:
https://www.acs.org.au/content/dam/acs/rules-and-regulations/Code-of-Professional-Conduct_v2.1.pdf
    2. The UTS Library on-line Journal Database may help with your research. It is
    accessible from http://www.lib.uts.edu.au/databases/search_databases.py. You need to
activate your UTS e-mail account (http://webmail.uts.edu.au/) in order to access the
resource.
    Assignment 1 Case Study
    Self-Driving Vehicles
    Example 1
    Here's How Tesla Solves A Self-Driving Crash Dilemma
    By Patrick Lin (contributor)
    (Forbes, 5 April 2017)
https://www.forbes.com/sites/patricklin/2017/04/05/heres-how-tesla-solves-a-self-driving-crash-dilemma/#1a9f70cf6813
    With very rare exceptions, automakers are famously coy about crash dilemmas. They don’t
    want to answer questions about how their self-driving cars would respond to weird, no-win
    emergencies. This is understandable, since any answer can be criticized—there’s no obvious
    solution to a true dilemma, so why play that losing game?
    But we can divine how an automaker approaches these hypothetical problems, which tell us
    something about the normal cases. We can look at patent filings, actual behavior in related
    situations, and other clues. A recent lawsuit filed against Tesla reveals a critical key to
    understanding how its autopiloted cars would handle the iconic “trolley problem” in ethics.
    Applied to robot cars, the trolley problem looks something like this:
    Do you remember that day when you lost your mind? You aimed your car at five random
    people down the road. By the time you realized what you were doing, it was too late to
    brake. Thankfully, your autonomous car saved their lives by grabbing the wheel from you and
    swerving to the right. Too bad for the one unlucky person standing on that path, struck and
    killed by your car. Did your robot car make the right decision?
    Either action here can be defended and no answer will satisfy everyone. By programming the
    car to retake control and swerve, the automaker is trading a big accident for a smaller accident,
    and minimizing harm seems very reasonable; more people get to live. But doing nothing and
    letting the five pedestrians die isn’t totally crazy, either.
    By allowing the driver to continue forward, the automaker might fail to prevent that big accident,
but it at least has no responsibility for creating an accident, as it would if it swerved into the
    unlucky person who otherwise would have lived. It may fail to save the five people, but—as
    many ethicists and lawyers agree—there’s a greater duty not to kill.
    1. Ok, what does this have to do with Tesla?
    The class-action lawsuit filed in December 2016 was not about trolley problems, but it was
about Tesla's decision to not use its Automatic Emergency Braking (AEB) system when a
    human driver is pressing on the accelerator pedal. This decision was blamed for preventable
    accidents, such as driving into concrete walls. From the lawsuit’s complaint:
    Tesla equips all its Model X vehicles, and has equipped its Model S vehicles since March 2015,
    with Automatic Emergency Braking whereby the vehicle computer will use the forward looking
    camera and the radar sensor to determine the distance from objects in front of the
    vehicle. When a frontal collision is considered unavoidable, Automatic Emergency Braking is
    designed to automatically apply the brakes to reduce the severity of the impact. But Tesla has
    programmed the system to deactivate when it receives instructions from the accelerator pedal
to drive full speed into a fixed object. Tesla confirmed this when it stated that Automatic
Emergency Braking will operate only when driving between 5 mph (8 km/h) and 85 mph (140
    km/h) but that the vehicle will not automatically apply the brakes, or will stop applying the
    brakes, “in situations where you are taking action to avoid a potential collision. For example:
    • You turn the steering wheel sharply.
    • You press the accelerator pedal.
    • You press and release the brake pedal.
    • A vehicle, motorcycle, bicycle, or pedestrian, is no longer detected ahead.”
    You can also find these specifications in Tesla’s owner manuals; see page 86 of 187.† What
    they suggest is that Tesla wants to minimize user annoyance from false positives, as well as to
    not second-guess the driver’s actions in the middle of an emergency. This makes sense, if we
    think robots should defer to human judgments.
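
The deactivation conditions quoted above can be read as a simple precedence rule: explicit driver input overrides automatic braking. A minimal, purely illustrative Python sketch of that reading (not Tesla's actual implementation; all names, fields and the speed-window handling are hypothetical simplifications) might look like this:

from dataclasses import dataclass

@dataclass
class VehicleState:
    """Hypothetical snapshot of sensor and driver inputs (illustrative only)."""
    speed_mph: float
    collision_imminent: bool      # forward camera + radar judge a frontal collision unavoidable
    obstacle_detected: bool       # a vehicle, motorcycle, bicycle or pedestrian is detected ahead
    sharp_steering: bool          # driver turns the wheel sharply
    accelerator_pressed: bool     # driver presses the accelerator pedal
    brake_pressed_released: bool  # driver presses and releases the brake pedal

def aeb_should_brake(s: VehicleState) -> bool:
    """Illustrative decision logic based on the conditions listed in the article:
    any explicit driver action, or loss of the detected obstacle, suppresses
    automatic emergency braking; otherwise AEB brakes within its speed window."""
    driver_override = s.sharp_steering or s.accelerator_pressed or s.brake_pressed_released
    if driver_override or not s.obstacle_detected:
        return False
    return s.collision_imminent and 5.0 <= s.speed_mph <= 85.0

On this reading, each listed driver action is sufficient on its own to suppress AEB, which is the design choice the rest of the article questions.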
    But isn’t this the point of autonomous cars in the first place: to take humans out of the equation,
    because we’re such poor drivers?
    Law professor Bryant Walker Smith said, “Am I concerned about self-driving cars? Yes. But I'm
    terrified about today’s drivers.” And Dr. Mark Rosekind, administrator of the US National
    Highway Traffic Safety Administration (NHTSA), compares the 35,000+ road fatalities in the US
    every year to a fully loaded 747 plane crash every week.
    Science-fiction writer Isaac Asimov might also reject Tesla's design, recognizing that we can't
    always privilege human judgment. According to his Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to
    come to harm.
    2. A robot must obey the orders given to it by human beings, except where such orders
    would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with
    the First or Second Law.
    In other words, following human orders should take a backseat to preventing harm to people;
    and crashing a robot car into a wall can injure the passengers inside, even if there are no
    people around on the outside.
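
Asimov's ordering amounts to a strict priority scheme: first rule out any option that harms a human, and only then consider obedience to the driver and preservation of the vehicle. A toy sketch of such a priority check (entirely hypothetical, not any manufacturer's logic; the action representation is invented for illustration):

def choose_action(candidates):
    """Pick an action under a strict Asimov-style priority ordering.
    Each candidate is a dict with boolean 'harms_human', 'obeys_driver'
    and 'preserves_vehicle' flags (hypothetical representation)."""
    safe = [a for a in candidates if not a["harms_human"]]  # First Law filter
    if not safe:
        return None  # no harm-free option exists: a genuine trolley-style dilemma
    # Second Law, then Third Law, expressed as a sort key
    return max(safe, key=lambda a: (a["obeys_driver"], a["preserves_vehicle"]))

actions = [
    {"name": "keep straight", "harms_human": True, "obeys_driver": True, "preserves_vehicle": True},
    {"name": "swerve right", "harms_human": False, "obeys_driver": False, "preserves_vehicle": True},
]
print(choose_action(actions)["name"])  # -> "swerve right": harm prevention outranks obedience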
    There’s a mismatch, then, between the safety case for self-driving vehicles—the primary reason
    why we want them—and Tesla’s design decision to not intervene in a collision that a human is
actively causing, such as stepping on the accelerator, regardless of who or what is in front of the
car, apparently.
    To be sure, if the car is in Autopilot or Traffic-Aware Cruise Control mode with no human input, it
will (or should) automatically brake for pedestrians, and the AEB system has
    reportedly saved lives this way already. But this doesn't seem to be the case when the human
    has control of the car, as Tesla suggests in previous comments:
    “AEB does not engage when an alternative collision avoidance strategy (e.g., driver
    steering) remains viable. Instead, when a collision threat is detected, forward collision
    warning alerts the driver to encourage them to take appropriate evasive action.”
    Back to our trolley-type crash dilemma, if that's correct, it means that a Tesla car
would not retake control of the wheel and swerve away from a group of people (or even brake),
    if the driver were deliberately driving into them. Again, this isn’t an unreasonable stance, but it
    is at odds with a goal to minimize deaths and injuries—in tension with the company’s frequent
    claims of “safety first.”
    But it’s really a “damned if you do, damned if you don’t” problem. If the car were programmed
    instead to retake control and swerve (as it can do), it’d create considerable legal liability for
    Tesla if this action caused a different accident, even one that’s less serious. This accident
    wouldn’t have occurred if the car did not retake control; so the company seems causally
    responsible for it, at least.
    And Tesla would have some explaining to do: why would it override human judgment if a
    pedestrian were in front of the car, but not if a concrete wall were in front of it? Is the driver’s
    safety less important here, because the pedestrian is more vulnerable to injuries?
    Such a design principle isn’t unreasonable either, but it raises further questions about whether a
    car might ever sacrifice its owner’s safety. That's to say, how exactly is Tesla thinking about the
    value of different lives: are they all worth the same, or are some prioritized more than
    others? The answer is not publicly known.
    2. Tesla’s reply to the lawsuit
    In its defense, Tesla responded this way to the lawsuit:
    “Tesla did not do what no manufacturer has ever done—“develop and implement
    computer algorithms that would eliminate the danger of full throttle acceleration into fixed
objects" even if it is caused by human error … Tesla disputes that there is a legal duty to
    design a failsafe car.”
    First, let's note that Tesla already and often does “what no manufacturer has ever done.” The
    company is clearly an innovator with many industry-first achievements; thus, a lack of precedent
    can’t be the real reason for its decision to not activate AEB in some situations.
    Second, it’s unclear what a “failsafe” car means. If it means a “perfect” car that can avoid all
    accidents, sure, that’s an impossible standard that no automaker can meet. But if it means
    “overriding harmful human actions”, then that’s debatable, and we’ll have to see how the lawsuit
    plays out. The ethics argument might go something like this:
    If you have the capacity to prevent something bad from happening—such as an intentional car
    crash with a wall or people—and you can do this without sacrificing anything important, then
    you arguably have a moral obligation to intervene. You may also have a legal obligation, such
    as if you were in a jurisdiction with Good Samaritan laws that require you to rescue or provide
    aid in perilous situations.
    And Tesla has the capacity to intervene. With its AEB and advanced sensors that can detect
    objects, motorists, and humans, it has the capacity to detect and avoid collisions under normal
    circumstances. As any comic book fan can tell you, with great power comes great
    responsibility. Superman may be faulted for not stopping a bullet, but you can't be; and robot
    cars are superheroes, compared to ordinary cars.
    So, if an automated car had to decide between overriding its driver’s actions and crashing into a
    concrete wall, then this argument suggests that it can and should choose the former. If
    anything is lost or sacrificed, it’s merely the driver’s autonomy, assuming that action was even
    intentional; and driving into one’s own house is usually unintentional.
But if the decision were to override its driver's actions and foreseeably run over one person, or
instead let the car crash into a crowd of people, then it gets tricky. The car could prevent this mass
murder, but not without sacrificing something important (the one person). In this no-win scenario,
either choice could be defended, and neither will satisfy everyone.
    3. And the point is…?
    This article isn't really about Tesla but about the larger autonomous driving industry. Tesla just
    happens to be the highest profile company here with production cars on the road, so there's
    more public information about them than others. The other automakers also face similar design
    choices and therefore dilemmas.
The point here is not that we should halt work on automated cars before solving oddball ethical
dilemmas. But it is to recognize that, by replacing the human driver with an AI driver,
automakers are taking on a lot of moral and legal responsibility they never had before, just like
    someone who gained superpowers. The human driver used to be liable for accidents caused
    by her or his decisions; now that liability is shifting to the AI driver that's capable of saving lives.
    If we think that this industry is as important as advertised, then it needs to be able to defend its
    design decisions in advance, not after the fact of an accident. Even if there are no perfect
    answers, had these issues been publicly explained earlier—that it's a feature, not a bug, that
    AEB doesn't second-guess human decisions, and here’s why—the lawsuit and its publicity may
    have been avoided.
    As explained earlier this week, the fake set-up of these hypothetical crash dilemmas doesn’t
    matter, just like it doesn't matter that most science experiments would never occur in
    nature. The insights they generate still tell us a lot about everyday decisions by the automated
    car, such as how much priority it should give to its occupants versus other road-users. This
    priority determines how much room the car gives to passing trucks, bicyclists, and pedestrians,
    among other safety-critical decisions—such as whether it's ok to crash into walls.
    Robot cars are coming our way, and that's fine if they can save lives, reduce emissions, and live
    up to other promises. But, for everyone’s sake, the industry needs to be more transparent
    about its design principles, so that we can better understand the risks and limits of the new
    technologies. NHTSA is already steering automakers in this direction by requesting safety
assessment letters, which ask about ethical considerations and other things.
    That isn't a lot to ask, since we’re sharing the road with these new and unfamiliar machines,
    entrusted with the lives of family and friends. Otherwise, as they say, expectations are just
premeditated resentments; and if the industry doesn't properly set expectations by explaining key
design choices, it is on the path toward resentment or worse.
    ~~~
    Acknowledgements: This work is supported by the US National Science Foundation, Stanford CARS, and California
    Polytechnic State University, San Luis Obispo. Any opinions, findings, and conclusions or recommendations expressed
    in this material are those of the author and do not necessarily reflect the views of the aforementioned organizations.
    ~~~
    † Technical note:
    Each one of Tesla's examples seems to be a sufficient, but not necessary, condition for AEB to deactivate. For
    example, several reports filed with NHTSA describe Tesla crashes into parked vehicles; if they're not cases of
    malfunction, they show that AEB can deactivate  even if  a vehicle or other listed object is detected ahead. User
    tests also suggest AEB will not prevent collisions with pedestrians in some circumstances; again, that's either a
    malfunction or by design.
    Autonomous driving engineers I've contacted believe that AEB's deference to human control, even with a pedestrian in
    the way,  is by design for the reasons discussed in this article. Other possible reasons include not wanting to trap a
    driver in an unsafe state, if AEB reacted to false positives (which have occurred) and didn't allow for manual override.
Without more elaboration from Tesla, details of when AEB is supposed to work are still a bit hazy, even confusing its own
customers. It could be that AEB initially activates even if the human is in full control but quickly backs off - that is, it
"will stop applying the brakes" - if the driver keeps a foot on the accelerator; and this wouldn't change the analysis above.
    Tesla has not yet responded to my request for clarification, and I will update this article if they do. But this may not
    matter: even if the AEB system is updated to intervene or second-guess a driver's action, ethical questions still arise,
    such as how it prioritizes the value of different lives, including that of its customers. Again, there's no obvious way to go
    here, and either choice will need defense; and the need for this technical note underscores the need for more
    communication by automakers, to reduce the speculation that naturally fills an information void.
    Example 2
    Our driverless dilemma: When should your car be willing to kill
    you? Navigating the system
    https://projects.iq.harvard.edu/files/mcl/files/greene-driverless-dilemma-sci16.pdf
    By Joshua D. Greene (Science, 23 June 2016)
    Suppose that a driverless car is headed toward five pedestrians. It can stay on course and kill
    them or swerve into a concrete wall, killing its passenger. On page 1573 of this issue, Bonnefon
    et al. (1) explore this social dilemma in a series of clever survey experiments. They show that
    people generally approve of cars programmed to minimize the total amount of harm, even at the
    expense of their passengers, but are not enthusiastic about riding in such “utilitarian” cars—that
    is, autonomous vehicles that are, in certain emergency situations, programmed to sacrifice their
    passengers for the greater good. Such dilemmas may arise infrequently, but once millions of
    autonomous vehicles are on the road, the improbable becomes probable, perhaps even
    inevitable. And even if such cases never arise, autonomous vehicles must be programmed to
    handle them. How should they be programmed? And who should decide?
    Bonnefon et al. explore many interesting variations, such as how attitudes change when a
    family member is on board or when the number of lives to be saved by swerving gets larger. As
    one might expect, people are even less comfortable with utilitarian sacrifices when family
    members are on board and somewhat more comfortable when sacrificial swerves save larger
    numbers of lives. But across all of these variations, the social dilemma remains robust. A major
    determinant of people’s attitudes toward utilitarian cars is whether the question is about
    utilitarian cars in general or about riding in them oneself.
    In light of this consistent finding, the authors consider policy strategies and pitfalls. They note
    that the best strategy for utilitarian policy-makers may, ironically, be to give up on utilitarian cars.
    Autonomous vehicles are expected to greatly reduce road fatalities (2). If that proves true, and if
    utilitarian cars are unpopular, then pushing for utilitarian cars may backfire by delaying the
    adoption of generally safer autonomous vehicles.

    As the authors acknowledge, attitudes toward utilitarian cars may change as nations and
    communities experiment with different policies. People may get used to utilitarian autonomous
    vehicles, just as some Europeans have grown accustomed to opt-out organ donation programs
    (3) and Australians have grown accustomed to stricter gun laws (4). Likewise, attitudes may
    change as we rethink our transportation systems. Today, cars are beloved personal
    possessions, and the prospect of being killed by one’s own car may feel like a personal betrayal
    to be avoided at all costs. But as autonomous vehicles take off, car ownership may decline as
    people tire of paying to own vehicles that stay parked most of the time (5). The cars of the future
    may be interchangeable units within vast transportation systems, like the cars of today’s subway
    trains. As our thinking shifts from personal vehicles to transportation systems, people might
    prefer systems that maximize overall safety.
    In their experiments, Bonnefon et al. assume that the autonomous vehicles’ emergency
    algorithms are known and that their expected consequences are transparent. This need not be
    the case. In fact, the most pressing issue we face with respect to autonomous vehicle ethics
    may be transparency. Life-and-death trade-offs are unpleasant, and no matter which ethical
    principles autonomous vehicles adopt, they will be open to compelling criticisms, giving
    manufacturers little incentive to publicize their operating principles. Manufacturers of utilitarian
    cars will be criticized for their willingness to kill their own passengers. Manufacturers of cars that
    privilege their own passengers will be criticized for devaluing the lives of others and their
    willingness to cause additional deaths. Tasked with satisfying the demands of a morally
    ambivalent public, the makers and regulators of autonomous vehicles will find themselves in a
    tight spot.
    Software engineers—unlike politicians, philosophers, and opinionated uncles— don’t have the
    luxury of vague abstraction. They can’t implore their machines to respect people’s rights, to be
    virtuous, or to seek justice—at least not until we have moral theories or training criteria
    sufficiently precise to determine exactly which rights people have, what virtue requires, and
    which tradeoffs are just. We can program autonomous vehicles to minimize harm, but that,
    apparently, is not something with which we are entirely comfortable.
    Bonnefon et al. show us, in yet another way, how hard it will be to design autonomous
    machines that comport with our moral sensibilities (6–8). The problem, it seems, is more
    philosophical than technical. Before we can put our values into machines, we have to figure out
    how to make our values clear and consistent. For 21st-century moral philosophers, this may be
    where the rubber meets the road.
    REFERENCES
1. J.-F. Bonnefon et al., Science 352, 1573 (2016).
2. P. Gao, R. Hensley, A. Zielke, A Road Map to the Future for the Auto Industry (McKinsey & Co., Washington, DC, 2014).
3. E. J. Johnson, D. G. Goldstein, Science 302, 1338 (2003).
4. S. Chapman et al., Injury Prev. 12, 365 (2006).
5. D. Neil, "Could self-driving cars spell the end of car ownership?", Wall Street Journal, 1 December 2015;
www.wsj.com/articles/could-self-driving-cars-spell-the-end-ofownership-1448986572.
6. I. Asimov, I, Robot [stories] (Gnome, New York, 1950).
7. W. Wallach, C. Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford Univ. Press, 2010).
8. P. Lin, K. Abney, G. A. Bekey, Robot Ethics: The Ethical and Social Implications of Robotics (MIT Press, 2011).