Phoenix Wright Blip and mouth animation library


Phoenix Wright Blip and mouth animation library

#1 Post by trajano »

This is my rendition of the Phoenix Wright text blip effect. Unlike the typical implementations that are triggered by character callback events, this one works from the text CPS (not just the preference setting, but inline {cps} tags as well) and the text itself, so it behaves more like the Phoenix Wright games. I have recently made a few changes to how it works internally to get better results.

Because Ren'Py at the moment does I/O for every file that is played, the sync gets a bit out of whack. To work around https://github.com/renpy/renpy/issues/1 ... -509771817, I assemble a single WAV file for the whole line: the WAV data is kept in a simple in-memory cache, io.BytesIO wraps the bytes, and the low-level renpysound.play is called to play the audio, since the higher-level methods only accept file names. Since it never needs to write to any temporary store, this works on Android as well.
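
Stripped of the blip assembly, the core of that trick looks roughly like this (a minimal sketch; the channel lookup and the renpysound.play call mirror the full listing below):

Code:

init python:
    import io

    # Cache of generated WAV bytes, keyed by an arbitrary name.
    wav_cache = {}

    def play_wav_bytes(name, wav_bytes):
        # Remember the bytes, then hand a file-like object straight to the
        # low-level mixer so no file I/O happens at play time.
        wav_cache[name] = wav_bytes
        channel = renpy.audio.audio.get_channel("sound").number
        renpy.audio.renpysound.play(
            channel, io.BytesIO(wav_cache[name]), name, tight=True, end=-1)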

Note that this requires the wave and chunk modules from the standard Python library, which you need to copy over as per https://github.com/renpy/renpy/issues/1 ... -511655253. You can copy the files from my sample test project into your game's python-packages folder; I just copied them from the Python sources.
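
If you want to fail fast when those modules are missing, a small startup check like this works (an optional sketch; it assumes the files were dropped into game/python-packages/, which Ren'Py adds to the import path):

Code:

init -2 python:
    # wave depends on chunk, so both files must be present. Running this at
    # init -2 makes it fire before the init -1 block below tries to import wave.
    try:
        import wave
        import chunk
    except ImportError:
        raise Exception(
            "Copy wave.py and chunk.py from the Python sources into game/python-packages/")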

Though it does play back on Android, your Android device will most likely not be fast enough if you are running a lot of animations while the blips are playing.

This only works with stereo WAV files, because of https://github.com/renpy/renpy/issues/1943 and because the wave module can be added easily.
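
If your blip sound is mono, a small offline script can duplicate the channel (a sketch using the standard wave and audioop modules; run it with plain Python outside Ren'Py, and adjust the hypothetical file names):

Code:

import audioop
import wave

def mono_to_stereo(src_path, dst_path):
    # Read the mono source file.
    src = wave.open(src_path, "rb")
    width = src.getsampwidth()
    rate = src.getframerate()
    frames = src.readframes(src.getnframes())
    src.close()

    # Duplicate the single channel into left and right at full volume.
    stereo_frames = audioop.tostereo(frames, width, 1.0, 1.0)

    dst = wave.open(dst_path, "wb")
    dst.setnchannels(2)
    dst.setsampwidth(width)
    dst.setframerate(rate)
    dst.writeframes(stereo_frames)
    dst.close()

mono_to_stereo("sfx-blipfemale.wav", "sfx-blipfemale.stereo.wav")

The full library follows: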

Code:

init -1 python in speakers:
    from renpy.text.textsupport import TAG, TEXT
    import renpy.text.textsupport as textsupport
    import os.path
    import io
    import re
    import wave

    # Image tags of characters whose dialogue is currently being typed out.
    speakers = set()

    """
    The sound file containing the blip sound.
    """
    blip_sound = "audio/sfx-blipfemale.stereo.wav"

    blipwave = wave.open(renpy.file(blip_sound))
    blip_framerate = blipwave.getframerate()
    blip_frame_length = blipwave.getnframes()
    blip_channels = blipwave.getnchannels()
    blip_sample_width = blipwave.getsampwidth()
    blip_frames = blipwave.readframes(blipwave.getnframes())
    blipwave.close()

    """
    The length of the blip
    """
    blip_length = 1.0 * blip_frame_length / blip_framerate

    """
    Blip cache.  This is not an LRU, and data here will be removed non-deterministically
    """
    blip_cache = {}

    """
    Blip cache limit.  If it reaches this size it removes items from the cache.
    """
    blip_cache_limit = 20

    def Character(name, image=None, **kwargs):
        if image is None:
            image = name.lower()
        sound_channel_number = renpy.audio.audio.get_channel("sound").number

        def character_callback(character):
            global speakers
            def the_callback(event, interact=True, **kwargs):
                if not interact:
                    return
                if event == "show":
                    speakers.add(character)
                elif event == "show_done":
                    pass
                elif event == "slow_done":
                    speakers.discard(character)
                    renpy.sound.stop()
                    renpy.restart_interaction()
            return the_callback

        def queue_blips(who, what, cps):
            """
            Queue the blips.  This creates a blip every other character and resets the blip when a comma or space is detected.
            If the CPS is higher frequency than the blip length, it switches to Wendy Oldbag mode where it just beeps as fast
            as it can.
            """

            computed_blip_file = "cache/%d.wav" % ( hash( (who, what, cps) ) )

            def play_to_sound_channel(name):
                renpy.audio.renpysound.play( sound_channel_number, io.BytesIO(blip_cache[name]), name, tight=True, end=-1)

            # Reuse the previously assembled WAV if this exact line has been generated before.
            if computed_blip_file in blip_cache:
                play_to_sound_channel(computed_blip_file)
                return

            tokens = textsupport.tokenize(unicode(what))
            # Toggles so that a blip is written only for every other visible character.
            odd = False
            inmemory_wave = io.BytesIO()
            blipout = wave.open(inmemory_wave, "wb")
            blipout.setframerate(blip_framerate)
            blipout.setsampwidth(blip_sample_width)
            blipout.setnchannels(blip_channels)

            cps_stack = []

            def silence(seconds):
                silence_byte_length = int(seconds *  blip_framerate * blip_channels * blip_sample_width)
                if silence_byte_length % (blip_sample_width * blip_channels) != 0:
                    silence_byte_length -= silence_byte_length % (blip_sample_width * blip_channels)
                return b'\0' * silence_byte_length

            def blip(seconds):
                silence_byte_length = ((seconds - blip_length) *  blip_framerate ) * blip_sample_width * blip_channels
                if silence_byte_length % (blip_sample_width * blip_channels) != 0:
                    silence_byte_length -= silence_byte_length % (blip_sample_width * blip_channels)
                return blip_frames + b'\0' * int(silence_byte_length)

            # initial character gap
            # queue.append("<silence %0.2f>" % (1.0/cps))
            blipout.writeframes(silence(1.0/cps))

            for token_type, token_text in tokens:
                if token_type == TEXT:
                    if cps > (1.0 / blip_length):
                        # Wendy Oldbag speed: the CPS outpaces the blip length (assume roughly
                        # 0.05 seconds per blip), so just keep writing blips back to back until
                        # the end of the segment.
                        beeps_needed = int(len(token_text) / cps / blip_length) * 2
                        for i in xrange(beeps_needed):
                            # queue.append("<from 0 to %0.3f>%s" % (blip_length, blip_sound))
                            blipout.writeframes(blip_frames)
                    elif cps == 0:
                        pass
                    else:
                        speed = 1.0/cps
                        for letter in token_text:
                            odd = not odd
                            if letter in ', ':
                                # queue.append("<silence %0.3f>" % speed)
                                blipout.writeframes(silence(speed))
                                odd = False
                            else:
                                if odd:
                                    # queue.append("<from 0 to %0.3f>%s" % (speed, blip_sound))
                                    blipout.writeframes(blip(speed))
                                else:
                                    # queue.append("<silence %0.3f>" % speed)
                                    blipout.writeframes(silence(speed))

                if token_type == TAG:
                    match_cps_multiplier = re.match( r'cps=\*([0-9\.]+)', token_text)
                    match_cps = re.match( r'cps=([0-9\.]+)', token_text)
                    match_close_cps = re.match( r'/cps', token_text)
                    if match_cps_multiplier:
                        cps_stack.append(cps)
                        cps *= float(match_cps_multiplier.group(1))
                    elif match_cps:
                        cps_stack.append(cps)
                        cps = float(match_cps.group(1))
                    elif match_close_cps:
                        cps = cps_stack.pop()
                    odd = False
            blipout.close()
            if len(blip_cache) >= blip_cache_limit:
                blip_cache.popitem()
            blip_cache[computed_blip_file] = inmemory_wave.getvalue()
            play_to_sound_channel(computed_blip_file)
            # renpy.sound.play(computed_blip_file)

        def blip_show_function(who, what, **kwargs):
            cps = renpy.game.preferences.text_cps
            if (cps > 0):
                queue_blips(who, what, cps)

            return renpy.character.show_display_say(
                who,
                what,
                **kwargs)

        return renpy.character.Character(name,
            image=image,
            callback=character_callback(image),
            show_function=blip_show_function,
            **kwargs
        )

    def MouthSwitch(character, talking_displayable, quiet_displayable):
        """
        This function creates a ConditionSwitch displayable that switches between talking and quiet characters.
        """
        return renpy.display.layout.ConditionSwitch(
            "speakers.IsSpeaking('%s')" % character, talking_displayable,
            "True",  quiet_displayable
        )

    def IsSpeaking(character):
        """
        This function can be used in a ConditionSwitch to check if a character is speaking or not.
        """
        return character in speakers
It is recommended that you default the text CPS to 15 and cap it at 20. In screens.rpy you can set up the preference bar so the range is limited, like this:

Code:

label _("Text Speed")

bar value Preference("text speed", range=20)
textbutton _("Default text speed") action Preference("text speed", value=15)
To use this, create the character using the provided constructor.

Code:

define m = speakers.Character("Mia")
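
For the mouth animation half, MouthSwitch can back a character's image; a rough sketch (the image file names are hypothetical, and the first argument must match the image tag that speakers.Character derives from the name):

Code:

image mia normal = speakers.MouthSwitch(
    "mia",                      # image tag used by speakers.Character("Mia")
    "images/mia_talking.png",   # shown while her line is typing out
    "images/mia_quiet.png")     # shown once the line finishes

label demo:
    show mia normal
    m "The mouth switches back as soon as this line finishes typing."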


Re: Phoenix Wright Blip and mouth animation library

#2 Post by Westeford »

Looks cool. This is something I've been interested in for a while now, but I really want to see it in action before I try copying it.
Can you provide a video demonstration, or a download of a sample project as a zip?


Re: Phoenix Wright Blip and mouth animation library

#3 Post by trajano »

My experiments specifically for the above are here: https://github.com/trajano/MyRenPyTest/tree/wright-blip (the "wright-blip" branch). I presume you know how to use GitHub.

UPDATE:
A newer branch of the code, with better results: https://github.com/trajano/MyRenPyTest/ ... -blip-wave


Re: Phoenix Wright Blip and mouth animation library

#4 Post by Westeford »

trajano wrote: Tue Jul 02, 2019 11:49 pm
UPDATE: Newer branch of code with better results https://github.com/trajano/MyRenPyTest/ ... -blip-wave
Thanks for the code samples and for the update too.

I apologize for asking, but is there a way to change voices depending on who's talking?
Also, is it possible to put the text blip sounds on the voice channel instead of the sound channel? It would be very convenient for players to be able to change the voice volume without affecting the rest of the sounds.

This is some amazing code. Keep up the good work.


Re: Phoenix Wright Blip and mouth animation library

#5 Post by trajano »

I know the first one can be done: changing the blip depending on who is talking. The second one I am not sure about yet; when I tried the "voice" channel it didn't work as I expected (i.e. it didn't work). However, I won't be changing this for a while because I've moved on to something else: viewtopic.php?f=52&t=56157
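
For the per-character voices, one possible direction (not something the library does today; the file names and tags are assumptions) would be to load a table of blip WAVs keyed by image tag at init time, and have queue_blips pick the speaker's frames instead of the single module-level blip_frames:

Code:

init -1 python in speakers:
    import wave

    # Hypothetical per-character blip files, keyed by image tag.
    blip_sound_files = {
        "mia": "audio/sfx-blipfemale.stereo.wav",
        "phoenix": "audio/sfx-blipmale.stereo.wav",
    }

    def load_blip(path):
        # Read a stereo blip WAV into memory once, at init time.
        w = wave.open(renpy.file(path))
        frames = w.readframes(w.getnframes())
        length = 1.0 * w.getnframes() / w.getframerate()
        w.close()
        return frames, length

    blips_by_tag = {tag: load_blip(path) for tag, path in blip_sound_files.items()}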


Re: Phoenix Wright Blip and mouth animation library

#6 Post by Tayruu »

This seems pretty neat, as I would like the text blips to pace themselves with the text speed. However, I'm with PyTom's comments in that circumventing the caching seems kind of weird, and I'm not even using WAVs in my project, but OGGs.

Based on the links, it sounds like you're constructing text-length WAVs to create the illusion of blips timed to the text? That wouldn't work very well if you pause in the middle of it...


Re: Phoenix Wright Blip and mouth animation library

#7 Post by trajano »

I have to check, but I recall that pausing triggers a new segment, and I create the blips per segment. UPDATE: unfortunately, it seems it just replays the first part of the segment when you pause.

Personally I'd rather not do any circumvention, but these are limitations of Ren'Py at the moment. It does an I/O and then a parse for every play, which prevents any semblance of playing the blip audio at a high rate properly.

I can't even access pygame.mixer.Channel to send sound requests there either.
