Potential tech which could generate 2d illustration from 3d assets

conclave
Newbie
Posts: 4
Joined: Sat Dec 16, 2017 5:09 am
Contact:

Potential tech which could generate 2d illustration from 3d assets

#1 Post by conclave »

Edit: Gathered enough interest in other places
Last edited by conclave on Thu Dec 21, 2017 7:36 am, edited 1 time in total.

SundownKid
Lemma-Class Veteran
Posts: 2299
Joined: Mon Feb 06, 2012 9:50 pm
Completed: Icebound, Selenon Rising Ep. 1-2
Projects: Selenon Rising Ep. 3-4
Organization: Fastermind Games
Deviantart: sundownkid
Location: NYC
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#2 Post by SundownKid »

It will probably be a hard sell for most developers. The biggest stumbling block is not the technology itself, but the fact that pre-made assets are generic and lack originality. When you consider that, it actually would cost far more to custom-make a 3D model to pose than to commission the art.

Also, visual novel fans tend not to be very demanding about the number of poses if the art is nice. In addition, having a ton of poses can actually hurt the game by making it harder for the developer to add them all in, so it's not necessarily a bonus.

papillon
Arbiter of the Internets
Posts: 4107
Joined: Tue Aug 26, 2003 4:37 am
Completed: lots; see website!
Projects: something mysterious involving yuri, usually
Organization: Hanako Games
Tumblr: hanakogames
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#3 Post by papillon »

Learning to use a 3d program in order to create those many poses and expressions still either takes time or requires hiring an artist, and using simple 3d assets for characters risks them looking too generic and losing the appeal factor necessary to make the game stand out. Especially if everyone's doing it :)

On the other hand, good 3d rendering tools that can produce the right kind of good-looking 2d results are very helpful for background artists.

LateWhiteRabbit
Eileen-Class Veteran
Posts: 1867
Joined: Sat Jan 19, 2008 2:47 pm
Projects: The Space Between
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#4 Post by LateWhiteRabbit »

conclave wrote: Sat Dec 16, 2017 7:15 am This deep learning expert created an algorithm which takes 3D images as inputs and generates a 2D illustration indistinguishable from the work of a human artist.

The main selling point of this tech is that there is NO human intervention during the generation process. It is a simple click-and-finish process; you don't have to be an illustration expert or fix imperfections here and there throughout the process.
---
My question is: how useful does this tech seem to you? If it does seem useful, I would like to discuss the particular features that would make it so.

I feel this could potentially save developers a lot of art budget, since pre-made assets can be bought on a hobbyist budget and then customised. For example, posing software like Daz3d allows you to change a character's pose and facial expression by simply adjusting sliders. There's no need to pay an artist for multiple drawings of sad/happy faces in different poses.
Well, it may look indistinguishable from the work of a human artist to you, but to those of us who have studied illustration and work in it, it is obviously the work of a computer filter on a 3D model. Similar things have been done for a long time: taking photos or 3D models and running them through filters to get the end result. This process may eliminate the human element of having to tweak the results, but it still isn't some giant leap forward in technology.

I worked in a mall 15 years ago that had a photobooth that turned the pictures into "drawings" for people, and while it wasn't as good as this, it still impressed most people.

This sort of thing is easy to do in Photoshop and even free programs like GIMP.

The algorithm makes the classic computer mistake of not knowing when to omit detail, use different line weights, or break lines to do things like avoid tangents. The hyperactive cross-hatching is a DEAD GIVEAWAY that it is a computer filtered drawing and not one done by a person. The result always looks busy and indecisive. Not to mention that you lose any voice or soul an individual or human artist would have given the drawing with their unique style.
SundownKid wrote: Sat Dec 16, 2017 9:37 am It will probably be a hard sell for most developers. The biggest stumbling block is not the technology itself, but the fact that pre-made assets are generic and lack originality. When you consider that, it actually would cost far more to custom-make a 3D model to pose than to commission the art.
This is a big one too. Because the algorithm cannot understand framing, posing, focal points, and lights, you have to set up all that stuff beforehand, yourself, to get a good result.

When you factor in that you'd need to purchase, make, or download 3D assets and the computer programs to pose and light them just to get to the step where the algorithm can do its job, it compounds the difficulty and cost of everything.
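
To put that concretely, here's a rough sketch (my own illustration, not anything from the algorithm's author) of the bare-minimum scene setup such a filter would still depend on, using Blender's Python API in recent builds; the "Character" object name, camera position, and light values are placeholders:

import bpy

# Placeholder: assumes a posed character object named "Character" already
# exists in the scene (e.g. an imported, pre-made 3D asset).
character = bpy.data.objects["Character"]

# Camera: the framing still has to be chosen by a human.
bpy.ops.object.camera_add(location=(0.0, -6.0, 1.6))
camera = bpy.context.object
bpy.context.scene.camera = camera

# Aim the camera at the character with a Track To constraint.
track = camera.constraints.new(type='TRACK_TO')
track.target = character
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# Key light: the lighting decisions are also made before any filter runs.
bpy.ops.object.light_add(type='AREA', location=(2.0, -3.0, 3.0))
bpy.context.object.data.energy = 500.0

# Render the image that would then be fed to the stylisation step.
scene = bpy.context.scene
scene.render.resolution_x = 1280
scene.render.resolution_y = 720
scene.render.filepath = "/tmp/input_for_filter.png"
bpy.ops.render.render(write_still=True)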

I feel like 'deep learning' is just becoming a buzzword. The results of this algorithm are just a little better than the dozens of Android apps that will turn your pictures into sketches.

axurias
Newbie
Posts: 8
Joined: Wed Jun 21, 2017 11:35 am
Deviantart: axurias
Github: https://github.com/axurias
Skype: axurias03@yahoo.com
Location: Indonesia
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#5 Post by axurias »

It depends on what kind of style you want to achieve.

Some things, such as a hand-painted look, might be better achieved by using 2D software only.

Another way is to make a 3D model and then use a "shader", either in your game engine or in your 3D software (such as Blender). Example: https://i.ytimg.com/vi/KSlud8DFXEk/hqdefault.jpg
(Credit to the creator of the 3D model; I got it from a YouTube channel called Daniel Kreuter.)
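
If it helps, here's a minimal sketch of that kind of toon "shader" built through Blender's Python API. This assumes a recent Blender with EEVEE (for the Shader to RGB node), and the material and object names are just placeholders I picked:

import bpy

# Build a simple two-tone "toon" material: Diffuse -> Shader to RGB ->
# constant ColorRamp, which flattens the lighting into hard bands.
mat = bpy.data.materials.new(name="ToonSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

diffuse = nodes.new("ShaderNodeBsdfDiffuse")
to_rgb = nodes.new("ShaderNodeShaderToRGB")   # EEVEE-only node
ramp = nodes.new("ShaderNodeValToRGB")        # the ColorRamp node
output = nodes.new("ShaderNodeOutputMaterial")

# Hard steps instead of smooth shading give the cel/toon look.
ramp.color_ramp.interpolation = 'CONSTANT'
ramp.color_ramp.elements[0].color = (0.12, 0.10, 0.15, 1.0)  # shadow tone
ramp.color_ramp.elements[1].position = 0.4
ramp.color_ramp.elements[1].color = (0.85, 0.80, 0.75, 1.0)  # lit tone

links.new(diffuse.outputs["BSDF"], to_rgb.inputs["Shader"])
links.new(to_rgb.outputs["Color"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], output.inputs["Surface"])

# Optional: Freestyle adds ink-like outlines on top of the flat shading.
bpy.context.scene.render.use_freestyle = True

# Assign the material to your character mesh (placeholder name).
obj = bpy.data.objects["Character"]
obj.data.materials.append(mat)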

Here's another way that some artists use as a pipeline: first they make a base 3D model, then they paint over it in 2D software (such as Photoshop).

So the answer to your question is: 3D software is powerful, BUT it depends on the art style you want to achieve. Personally, I find 3D software good for environments (backgrounds only). For characters, 2D software is enough, especially if you want a VN feel; using a 3D model for a character takes a lot of effort (at least more than just image editing in 2D software).

cheers :)

Donmai
Eileen-Class Veteran
Posts: 1958
Joined: Sun Jun 10, 2012 1:45 am
Completed: Toire No Hanako, Li'l Red [NaNoRenO 2013], The One in LOVE [NaNoRenO 2014], Running Blade [NaNoRenO 2016], The Other Question, To The Girl With Sunflowers
Projects: Slumberland
Location: Brazil
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#6 Post by Donmai »

Hi, conclave. Well, as someone who's been on this forum for five years, creating some narratives that make use of computer-generated art, I can only assume that your enthusiasm for that "algorithm" comes from the fact that you are not an illustrator. Is that true?
I have to agree that computer-generated art will never look like handmade art, sometimes even if a human does some retouching. In the example image you showed, I could easily tell it was a 3D render because of the way the mermaid's body bends (frankly, that was awful) and the static feel of the whole scene.
I remember an old joke that someone would always bring up in the DAZ Forums some years ago, every time a new version of DAZ Studio was released: "Hey, where's the Make Art button?" Of course, there's no such thing. Judging by your enthusiastic description, you seem to believe you have found that "Make Art" button. But no, 3D software and computers are only tools; they can help us to some extent, but they will never make artistic work on their own. And sorry, you will still need an illustrator around. I'm just lucky I'm one myself.
Image
No, sorry! You must be mistaking me for someone else.
TOIRE NO HANAKO (A Story About Fear)

conclave
Newbie
Posts: 4
Joined: Sat Dec 16, 2017 5:09 am
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#7 Post by conclave »

SundownKid wrote: Sat Dec 16, 2017 9:37 am It will probably be a hard sell for most developers. The biggest stumbling block is not the technology itself, but the fact that pre-made assets are generic and lack originality. When you consider that, it actually would cost far more to custom-make a 3D model to pose than to commission the art.
I disagree about them looking 'generic'. There are hundreds of customizable parts for a 3D character using Daz3d. And you don't need to be super technical, since it's just adjusting sliders for eye size, etc. here and there. You can customize a character to become many different variations of the Hulk if that's what you want.
LateWhiteRabbit wrote: Sat Dec 16, 2017 11:53 am The algorithm makes the classic computer mistake of not knowing when to omit detail, use different line weights, or break lines to do things like avoid tangents. The hyperactive cross-hatching is a DEAD GIVEAWAY that it is a computer filtered drawing and not one done by a person. The result always looks busy and indecisive. Not to mention that you lose any voice or soul an individual or human artist would have given the drawing with their unique style.
The screenshots I showed were meant to be a proof of concept. Just because there were some artifacts here and there doesn't mean it's impossible to get rid of them.

The algorithm could be extended to be capable of generating something like this:

Image

Also, here's a colored illustration example generated from the algorithm, which I challenge you to find flaws in:

https://www.daz3d.com/forums/uploads/Fi ... 13acb9.png
LateWhiteRabbit wrote: Sat Dec 16, 2017 11:53 am This is a big one too. Because the algorithm cannot understand framing, posing, focal points, and lights, you have to set up all that stuff beforehand, yourself, to get a good result.
I'm not sure if you meant that you don't have to do this work with the traditional approach because you just stick a 2D figure on a background. The algorithm will allow you to do the same.

If you want to place 2D characters in a background with correct perspective and such, you still have to think about them even if you don't use this algorithm. Also, one purpose of the algorithm is to reduce the amount of attention to detail required compared to 3D renderings, so that focal points, lights and such don't have to be set up perfectly.
The details mentioned are good to keep in mind, but not relevant to the usefulness of the algorithm, since it isn't supposed to be a director, just an illustrator. As long as an illustrator conveys the lighting, etc. that it was told to draw, it's doing its job.

fleet
Eileen-Class Veteran
Posts: 1571
Joined: Fri Jan 28, 2011 2:25 pm
Deviantart: fleetp
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#8 Post by fleet »

@conclave,
I've sent you a PM.
Very respectfully,
fleet
Some of my visual novels are at http://www.the-new-lagoon.com. They are NSFW
Poorly done hand-drawn art is still poorly done art. Be a Poser (or better yet, use DAZ Studio 3D) - dare to be different.

LateWhiteRabbit
Eileen-Class Veteran
Posts: 1867
Joined: Sat Jan 19, 2008 2:47 pm
Projects: The Space Between
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#9 Post by LateWhiteRabbit »

conclave wrote: Sat Dec 16, 2017 8:38 pm I disagree about them looking 'generic'. There are hundreds of customizable parts for a 3D character using Daz3d. And you don't need to be super technical, since it's just adjusting sliders for eye size, etc. here and there. You can customize a character to become many different variations of the Hulk if that's what you want.
Even customized characters from Daz3D often end up looking 'generic' because they are in the same style. I've used Daz3D extensively in the past, as well as made 3D models from scratch. (I have a bachelor's degree in Graphic Art and Design, with a focus on interactive media.)

What if someone wants a moe anime-style character? That is about a thousand times harder to model in 3D for a good render, because a good cartoon render of such a model requires ignoring edges and knowing when to flatten and omit lighting detail, which ironically often involves artists hand-painting normal maps for the cartoon renders to look good.

What if you want something like Skottie Young's illustrations?

That's what we mean when we talk about how hard it is going to be to come up with different 3D models that produce good rendered results.
conclave wrote: Sat Dec 16, 2017 8:38 pm The screenshots I showed were meant to be a proof of concept. Just because there were some artifacts here and there doesn't mean it's impossible to get rid of them.

Also, here's a colored illustration example generated from the algorithm, which I challenge you to find flaws in:

https://www.daz3d.com/forums/uploads/Fi ... 13acb9.png
It isn't about 'artifacts'; it's that the algorithm isn't making the same decisions a professional human artist would. Maybe it could be trained to make those decisions. And I'm not saying it isn't getting some nice results. It totally is. My whole issue is the hyperbole of claiming it can't be distinguished from a human artist.

conclave wrote: Sat Dec 16, 2017 8:38 pm I'm not sure if you meant that you don't have to do this work with the traditional approach because you just stick a 2D figure on a background. The algorithm will allow you to do the same.

If you want to place 2D characters in a background with correct perspective and such, you still have to think about them even if you don't use this algorithm. Also, one purpose of the algorithm is to reduce the amount of attention to detail required compared to 3D renderings, so that focal points, lights and such don't have to be set up perfectly.
The details mentioned are good to keep in mind, but not relevant to the usefulness of the algorithm, since it isn't supposed to be a director, just an illustrator. As long as an illustrator conveys the lighting, etc. that it was told to draw, it's doing its job.
I mention those things because every algorithm is only as good as its inputs. If you don't feed the algorithm a good model with good lighting, I doubt you are going to get any kind of good result.

And I have to ask:
Why haven't you mentioned the name of this algorithm or linked to information on it? Is it only being run on a university mainframe or research supercomputer? Why are all the examples almost 2 years old? This discussion is all purely rhetorical without that information.

Kinjo
Veteran
Posts: 219
Joined: Mon Sep 19, 2011 6:48 pm
Completed: When the Seacats Cry
Projects: Detective Butler
Organization: Goldbar Games
Tumblr: kinjo-goldbar
Deviantart: Kinjo-Goldbar
Github: GoldbarGames
Skype: Kinjo Goldbar
itch: goldbargames
Location: /seacats/
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#10 Post by Kinjo »

I was also curious about the origin of the algorithm, so I did some digging and found it. Personally, I think there are some more compelling examples there compared to what you posted here, and I can see why you would think it's useful for visual novels.

Although it's not a magic button (it's just a shader applied to a render), I don't think it should be dismissed so quickly. Some people might find it useful, like those who are better at 3D sculpting than hand-drawing. They might be able to make artwork for their game a lot faster.

However, the bottom line is this: you still need an artist to do the 3D modelling. The reason those renders came out so well is because the models were so intricately designed and composed. They probably have thousands, if not hundreds of thousands of vertices and edges, and rigging the models to animate properly is no easy feat either. So you could definitely get a variety of poses for characters out of it, and even a variety of CGs taking place in the same scene. But if you're going to the effort of creating 3D assets for a 2D game, why not just make a 3D game instead?

Zylinder
Veteran
Posts: 320
Joined: Sat May 07, 2011 4:30 am
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#11 Post by Zylinder »

conclave wrote: Sat Dec 16, 2017 8:38 pm
The algorithm could be extended to be capable of generating something like this:

Image
When I reverse Google-searched that, it gave me The House in Fata Morgana, and the game has quite an erratic art style. Can you submit links to a workflow, or an article discussing the workflow, used to create this piece in particular? Because it's the only one I've seen thus far that can convince me it's an illustration, if it is indeed a render.

As for your submission of the Millennium Falcon, I think what people are saying is that without a good design, the render is not going to be appealing. The Millennium Falcon looks good because it was well designed. The majority of the work submitted on that thread using generic 3D renders looks extremely amateurish, lacking the charm of even an amateur 2D artist, because in its present state it all looks like filtered 3D models.

2D artists can churn out illustrative work at three times the speed of modellers of equal skill. With present (affordable) technology, there's no way a modeller can design, model, rig, light, and render as fast as an illustrator of equal calibre.

conclave
Newbie
Posts: 4
Joined: Sat Dec 16, 2017 5:09 am
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#12 Post by conclave »

LateWhiteRabbit wrote: Sun Dec 17, 2017 6:39 pm What if someone wants a moe anime-style character? That is about a thousand times harder to model in 3D for a good render, because a good cartoon render of such a model requires ignoring edges and knowing when to flatten and omit lighting detail, which ironically often involves artists hand-painting normal maps for the cartoon renders to look good.

What if you want something like Skottie Young's illustrations?
I guess I need to make a distinction here.
There are certainly limitations in replicating certain styles, like 'moe' and the one in the link you gave. The approach required for them is totally different from the one I'm proposing. I found that there are many shader techniques already available for generating 'moe' images from 3D, and I'm not really interested in that art style. I suppose you don't need to, and shouldn't, use my tech for 'moe'-style art and the like.

The art style I'm trying to replicate with this tech would be semi-realistic illustrations, as opposed to 'freestyle' illustrations where the drawings wildly disobey perspective and other rules. For example, the 'House in Fata Morgana' screenshot I provided is something that I definitely want to replicate with this tech.

LateWhiteRabbit wrote: Sun Dec 17, 2017 6:39 pm Why haven't you mentioned the name of this algorithm or linked to information on it? Is it only being run on a university mainframe or research supercomputer? Why are all the examples almost 2 years old? This discussion is all purely rhetorical without that information.
It wasn't my intention to leave out the source; there were other important details I wanted to convey, and this wasn't my top priority. Also, no one asked about it until now. He posts new renders sometimes, but they are not archived in a nicely organised way for people to just skim through.

I talked to the expert and, unfortunately, he designed the tech in such a way that it's not publicly usable: the code wasn't written with that in mind, and it would take a lot of man-hours to fix. The fact that it's not publicly available is also why I'm looking into making my own. If you want to know more details about the existing technology I mentioned, I made heaps of notes from the expert, and I can discuss this over IM (such as Discord) instead of having back-and-forth conversations with a 12-hour delay.
Kinjo wrote: Sun Dec 17, 2017 9:02 pm However, the bottom line is this: you still need an artist to do the 3D modelling. The reason those renders came out so well is because the models were so intricately designed and composed. They probably have thousands, if not hundreds of thousands of vertices and edges, and rigging the models to animate properly is no easy feat either. So you could definitely get a variety of poses for characters out of it, and even a variety of CGs taking place in the same scene. But if you're going to the effort of creating 3D assets for a 2D game, why not just make a 3D game instead?
As I mentioned before, while 2d assets are easier to create from scratch, there are hardly any pre-made 2d assets which are reusable. For 3d, the reverse is true. They are more difficult to create from scratch, but there are LOTS of 3d pre-made assets which are reusable AND customisable (also mentioned before).
Zylinder wrote: Sun Dec 17, 2017 11:56 pm When I reverse Google-searched that, it gave me The House in Fata Morgana, and the game has quite an erratic art style. Can you submit links to a workflow, or an article discussing the workflow, used to create this piece in particular? Because it's the only one I've seen thus far that can convince me it's an illustration, if it is indeed a render.
It's not a render. I was trying to say that deep learning technology is quite capable and the possibilities aren't as limited as some people like to think.

I'm new to the tech as well. If you want to research how to go about rendering this, I found a good resource for this sort of rendering: http://blendernpr.org/

ThisIsNoName
Veteran
Posts: 311
Joined: Fri Feb 10, 2012 10:15 pm
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#13 Post by ThisIsNoName »

While I haven't used it myself, you might be interested in StyLit. The biggest downside seems to be that you can only shade in one style. It sounds like it's not exactly what you're thinking of, but it can get you similar results.

Kinjo
Veteran
Posts: 219
Joined: Mon Sep 19, 2011 6:48 pm
Completed: When the Seacats Cry
Projects: Detective Butler
Organization: Goldbar Games
Tumblr: kinjo-goldbar
Deviantart: Kinjo-Goldbar
Github: GoldbarGames
Skype: Kinjo Goldbar
itch: goldbargames
Location: /seacats/
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#14 Post by Kinjo »

conclave wrote: Mon Dec 18, 2017 7:51 am As I mentioned before, while 2d assets are easier to create from scratch, there are hardly any pre-made 2d assets which are reusable. For 3d, the reverse is true. They are more difficult to create from scratch, but there are LOTS of 3d pre-made assets which are reusable AND customisable (also mentioned before).
Ah, yeah I'd forgotten you did say that, whoops. Though in that case, I still think that your greatest obstacle would be trying to find good assets. Yes, there's a ton of pre-made 3D stuff available, but is it going to look good? And is it going to look unique? I've used the 3D Warehouse for free models, and although there is indeed a large selection, there's also a large number of models you'll never need or wouldn't want to ever use. Maybe DAZ has a better collection, but I'm just warning you that if you really want your models to stand out, you might want to consider making your own. Otherwise, whether this tool saves money really just depends on how much the models cost and what kind of quality you're willing to settle for.

Also, about the technology itself... Assuming it really is a deep learning algorithm, you can just train it on data sets closer to the style you want it to create. It learns just like a person would learn how to draw, so even if a professional artist can tell it's "machine-made", there could very well come a point in the future where that's no longer true. Plus, speaking as someone who isn't a professional artist, it certainly fooled me, and really the average player (who can't tell the difference) won't care how your assets are made -- just as long as they look good.
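
Just to make "train it on data sets" concrete, here's a bare-bones sketch of a paired render-to-illustration training loop in PyTorch. The tiny network, random placeholder tensors, and plain L1 loss are simplifications of my own (a pix2pix-style setup would use a U-Net generator plus an adversarial loss), not the actual algorithm discussed in this thread:

import torch
import torch.nn as nn

# Toy generator mapping a rendered RGB image to a "stylised" RGB image.
generator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4)
l1_loss = nn.L1Loss()

# Placeholder training pairs: in practice these would be 3D renders and
# matching illustrations in the target style, loaded from disk.
renders = torch.rand(8, 3, 64, 64)
illustrations = torch.rand(8, 3, 64, 64)

for step in range(50):
    optimizer.zero_grad()
    stylised = generator(renders)            # network's attempt at the style
    loss = l1_loss(stylised, illustrations)  # distance from the target look
    loss.backward()
    optimizer.step()

Swap the placeholder tensors for real pairs of renders and illustrations in the style you're after, and the same loop nudges the network toward that look; the hard part is curating those pairs.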

conclave
Newbie
Posts: 4
Joined: Sat Dec 16, 2017 5:09 am
Contact:

Re: Potential tech which could generate 2d illustration from 3d assets

#15 Post by conclave »

Kinjo wrote: Tue Dec 19, 2017 5:06 amI still think that your greatest obstacle would be trying to find good assets. Yes, there's a ton of pre-made 3D stuff available, but is it going to look good? And is it going to look unique? I've used the 3D Warehouse for free models, and although there is indeed a large selection, there's also a large number of models you'll never need or wouldn't want to ever use. Maybe DAZ has a better collection, but I'm just warning you that if you really want your models to stand out, you might want to consider making your own. Otherwise, whether this tool saves money really just depends on how much the models cost and what kind of quality you're willing to settle for.
The 3D store you linked seems to be on par with TurboSquid. Daz store assets blow both out of the water, in my opinion. If you check https://www.daz3d.com/explore-sam-kennedy, you'll see some professionals are already using Daz assets as a base for their illustrations. That again speaks volumes for the capabilities of its assets.

Lastly, I've played enough adult Ren'Py games made with Daz assets to conclude that there's a lot of artistic creativity that can be tapped with those assets.

As you pointed out, depending on pre-made assets somewhat constricts your creativity to what's available for sale (which is still impressive, by the way). But it seems that hasn't stopped hobbyists from producing their own authentic Daz porn.
Kinjo wrote: Tue Dec 19, 2017 5:06 am Also, about the technology itself... Assuming it really is a deep learning algorithm, you can just train it on data sets closer to the style you want it to create. It learns just like a person would learn how to draw, so even if a professional artist can tell it's "machine-made", there could very well come a point in the future where that's no longer true. Plus, speaking as someone who isn't a professional artist, it certainly fooled me, and really the average player (who can't tell the difference) won't care how your assets are made -- just as long as they look good.
Thanks for acknowledging this. I talked to other deep learning experts as well. The current era of deep learning is mainly 'first generation' techniques, where you specify the training methods for your algorithm. The algorithm doesn't learn new skills beyond optimising the values it was set up to optimise. Next-gen deep learning deals with the more advanced, human-like traits you described: learning new skills on its own even when we don't specify the training methods.

I think it would probably take me decades to go the next-gen route, so I'll probably stick with first-gen techniques, which are still perfectly capable of amazing things.
