Generative models, such as Generative Adversarial Networks (GANs), are a rapidly advancing area of research in computer science and machine learning. It is hard to keep track of them all, not to mention the incredibly creative ways in which researchers have applied and extended them.
The following figures show some results from recent work (images from https://blog.openai.com/generative-models/).
As a Ph.D. student in CS, I think it is necessary to understand the basic pros and cons of these models, and doing so may be very helpful to your own research. I have not fully reviewed the theory, but after skimming a few papers, I got the impression that training GAN models is tricky, as with any neural network model. Thus, there is still plenty of room for improvement.
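For context, the standard GAN formulation (Goodfellow et al., 2014) pits a generator G against a discriminator D in a minimax game, and much of the training instability people complain about comes from alternating gradient steps on this objective:

```latex
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

In practice, the generator is often trained to maximize \(\log D(G(z))\) instead of minimizing \(\log(1 - D(G(z)))\), since the latter gives vanishing gradients when the discriminator is winning early in training.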
Thanks to the internet, papers and code are everywhere, and nobody need be left behind these days unless they want to be. So work hard and be a better person (for the good of humanity), cheers!
Here are some papers and blog posts that summarize the literature well.
Abstract: Autonomous vehicles (AVs) should reduce traffic accidents, but they will sometimes have to choose between two evils, such as running over pedestrians or sacrificing themselves and their passenger to save the pedestrians. Defining the algorithms that will help AVs make these moral decisions is a formidable challenge …
It is well known that autonomous vehicles (AVs) will change the world. AVs have the potential to benefit society by increasing traffic efficiency, reducing pollution, and eliminating up to 90% of traffic accidents.
The problem is that not all crashes can be avoided; some will require AVs to make difficult ethical decisions in cases that involve unavoidable harm.
The following figure shows three scenarios of exactly the kind we worry about.
The AV may avoid harming several pedestrians by swerving and sacrificing a passerby (Fig. 1A), or it may be faced with the choice of sacrificing its own passenger to save one or more pedestrians (Fig. 1B, C).
Even if these scenarios never arise, AV programming must still include decision rules about what to do in such hypothetical situations.
Thus, the algorithms that control AVs need to embed moral principles guiding their decisions in situations of unavoidable harm.
Manufacturers and regulators will need to accomplish three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers.