A Novel Construction of Constrained Verifiable Random Functions

Constrained veriable random functions (VRFs) were introduced by Fuchsbauer. In a constrained VRF, one can drive a constrained key skS from the master secret key sk, where S is a subset of the domain. Using the constrained key skS, one can compute function values at points which are not in the set S. e security of constrained VRFs requires that the VRFs’ output should be indistinguishable from a random value in the range. ey showed how to construct constrained VRFs for the bit-xing class and the circuit constrained class based on multilinear maps. eir construction can only achieve selective security where an attacker must declare which point he will attack at the beginning of experiment. In this work, we propose a novel construction for constrained veriable random function from bilinear maps and prove that it satises a new security denition which is stronger than the selective security. We call it semiadaptive security where the attacker is allowed to make the evaluation queries before it outputs the challenge point. It can immediately get that if a scheme satised semiadaptive security, and it must satisfy selective security.


Introduction
Pseudorandom functions (PRFs) are one of the basic concepts in modern cryptography and were introduced by Goldreich et al. [1]. A PRF is an efficiently computable function F : K × X ⟶ Y. For a randomly chosen key sk ∈ K, a probabilistic polynomial-time (PPT) adversary cannot distinguish the output F(sk, x) of the function, for any x ∈ X, from a value chosen uniformly at random from Y.
Boneh and Waters [2] extended the concept of PRFs and presented a new notion called constrained pseudorandom functions. A constrained PRF is the same as a standard PRF except that it is associated with a set S ⊂ X. It contains a master key sk ∈ K which can be used to evaluate the function at all points of the domain X. Given the master key sk ∈ K and a set S ⊂ X, one can generate a constrained key sk_S which can be used to evaluate F(sk, x) for any x ∉ S. Pseudorandomness requires that, given several constrained keys for sets S_1, . . . , S_{q_1} ⊂ X and several function values at points x_1, . . . , x_{q_2} ∈ X chosen adaptively by the adversary, the adversary cannot distinguish a function value F(sk, x) from a random value for any x such that x ≠ x_i for all i ∈ [q_2] and x ∈ ∩_{j=1}^{q_1} S_j. Constrained PRFs have been used to optimize the ciphertext length of broadcast encryption [2] and to construct multiparty key exchange [3].
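To make the constrained-key idea concrete, the classic GGM tree construction (not this paper's scheme) admits prefix-constrained keys: the subtree root for a prefix lets one evaluate the PRF on exactly the inputs extending that prefix. A minimal Python sketch, with SHA-256 standing in for a length-doubling PRG:

```python
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    # Length-doubling PRG modeled with SHA-256 (illustrative stand-in).
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def ggm_eval(key: bytes, x: str) -> bytes:
    # F(key, x): walk the GGM tree, taking the left/right PRG half per bit.
    node = key
    for bit in x:
        node = prg(node)[int(bit)]
    return node

def constrain_prefix(key: bytes, prefix: str) -> bytes:
    # Constrained key for the set of inputs starting with `prefix`:
    # simply the tree node reached by walking the prefix.
    node = key
    for bit in prefix:
        node = prg(node)[int(bit)]
    return node

master = b"\x00" * 32
ck = constrain_prefix(master, "01")
assert ggm_eval(master, "0110") == ggm_eval(ck, "10")
```

Handing out the node for prefix "01" reveals nothing about values outside that subtree, which is the intuition behind constrained-key pseudorandomness.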
Verifiable random functions were introduced by Micali et al. [4]. A VRF is similar to a pseudorandom function: a PPT adversary cannot distinguish an evaluated value F(sk, x) from a random value even if it is given values at other points. A VRF has the additional property that the party holding the secret key can evaluate F on x ∈ X together with a noninteractive proof. With the proof, anyone can verify the correctness of a given evaluation using the public key. The evaluation of F(sk, x) should remain pseudorandom even if an adversary can query values and proofs at other points. Lastly, verification should remain sound even if the public key was computed maliciously. VRFs have been used to construct zero-knowledge proofs [5], electronic payment schemes [6], and so on.
In SCN 2014, Fuchsbauer [7] extended the notion of VRFs to constrained VRFs. In addition to the three polynomial-time algorithms Setup, Prove, and Verify, he defined another algorithm, Constrain, which is used to derive a constrained key. A constrained VRF generates a key pair (pk, sk) in the Setup algorithm. Given a constrained key sk_S for a set S ⊂ X and a point x ∉ S, the algorithm Prove computes a value y = F(sk, x) together with a proof π which can be used to verify the correctness of y = F(sk, x) under the public key pk. A constrained VRF should satisfy the security notions of provability, uniqueness, and pseudorandomness. Pseudorandomness requires that the evaluation F(sk, x) be indistinguishable from a random value, even if the adversary is given several constrained keys for subsets S_1, . . . , S_{q_1} ⊂ X and several function values with proofs at points x_1, . . . , x_{q_2}, where x ≠ x_i for all i ∈ [q_2] and x ∈ ∩_{j=1}^{q_1} S_j. A possible application of constrained VRFs is micropayments [8]. Micropayment schemes emphasize the ability to make payments of small amounts. In probability-based micropayment, a large number of users and merchants jointly select one user to pay a cheque: micropayments from many users are converted, with small probability, into a macropayment by one user. In this scheme, how do we decide, in a fair way, which cheque C should be payable? Using VRFs, merchant M publishes pk_M for a VRF with range Y = [0, 1]. Cheque C is payable if F(sk_M, C) < s, where s is a known selection rate. However, this approach has a drawback: it needs a public key infrastructure (PKI) for the merchants' keys pk_M. With constrained VRFs, every merchant uses the same key sk. Merchant M gets a constrained key sk_M for the set (id_M, C), where id_M is the identity of merchant M. Cheque C is payable if F(sk_M, id_M ‖ C) < s. Anybody can check the result with the same public key pk.
Therefore, no PKI for merchants is needed. Fuchsbauer [7] gave two constructions from multilinear maps based on the constrained PRFs proposed by Boneh and Waters [2]. The first is bit-fixing VRFs, in which constrained keys can be derived for any set S_υ ⊂ {0, 1}^n, where S_υ is described by a vector υ ∈ {0, 1, ⊥}^n as the set of all strings that match υ at all coordinates that are not ⊥. The second is circuit-constrained VRFs, in which constrained keys can be derived for any set that is decidable by a polynomial-size circuit.
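The lottery selection rule above can be sketched as follows. This is our own illustration: HMAC-SHA256 stands in for the VRF (it yields no publicly verifiable proof), and the function names and the encoding of id_M ‖ C are ours.

```python
import hmac, hashlib

def vrf_value(sk: bytes, data: bytes) -> float:
    # Stand-in for F(sk, ·): map the PRF output into the range [0, 1).
    digest = hmac.new(sk, data, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def payable(sk: bytes, merchant_id: bytes, cheque: bytes, rate: float) -> bool:
    # Cheque C is payable iff F(sk, id_M || C) < s.
    return vrf_value(sk, merchant_id + b"|" + cheque) < rate

# Over many cheques, roughly a `rate` fraction is selected for macropayment.
sk = b"master-secret"
hits = sum(payable(sk, b"M1", str(i).encode(), 0.25) for i in range(4000))
```

Because the selection is a deterministic function of the cheque, anyone holding the (real scheme's) public key could audit which cheques were payable.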
However, Fuchsbauer's constructions [7] achieve only selective security, a weaker notion where the adversary must commit to a challenge point x* at the beginning of the experiment. Via complexity leveraging, any selectively secure scheme can be converted into an adaptively secure one, where the adversary can make its challenge query at any point: the reduction simply guesses beforehand which challenge value the adversary will query. However, this leads to a security loss that is exponential in the input length. In this work, we ask an ambitious question: is it possible to construct a constrained VRF which satisfies a stronger security notion than selective security?
In this work, we propose a novel construction based on bilinear maps. Inspired by the constrained PRFs of Hohenberger et al. [9], we construct a VRF with constrained keys for any set of polynomial size and define a new security notion named semiadaptive security. It allows the adversary to query the evaluation oracle before it outputs a challenge point, while the public key is returned to the adversary together with the challenge evaluation. This definition is stronger than selective security, which can be verified easily.
Our scheme is derived from the constrained PRF constructions given by Hohenberger et al. [9]. It is defined over a bilinear group setting, which contains three groups G_1, G_2, and G_T of composite order N = pq, equipped with a bilinear map e : G_1 × G_2 ⟶ G_T. The constrained VRFs map an input from {0, 1}^ℓ to G_T, where h : {0, 1}^ℓ ⟶ {0, 1}^n is an admissible hash function. The VRFs are defined as F(sk, x) = e(v^{∏_{i=1}^{n} d_{i,h(x)_i}}, w^c), associated with a proof P(sk, x) = v^{∏_{i=1}^{n} d_{i,h(x)_i}}, where h(x)_i is the i-th bit of h(x).
In order to verify the correctness of an evaluation, we define the public key as pk = (w, w^c, iO(C)), where iO(C) is an obfuscation of a circuit which takes a point x as input and outputs an element D(x) := e(v^{∏_{i=1}^{n} d_{i,h(x)_i}}, w) of G_T. The verifier only needs to check e(P(sk, x), w) = D(x) and e(P(sk, x), w^c) = F(sk, x). The constrained key is an obfuscation of a circuit that has the secret key sk and the constrained set S hardwired in it. On input a value x ∉ S, it outputs (F(sk, x), P(sk, x)). However, this solution would work only if the obfuscator achieved a black-box obfuscation definition [10]; there is no reason to believe that an indistinguishability obfuscator would hide the secret key sk.
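To see why the two verification equations hold, it helps to work in exponent (discrete-log) notation, where a bilinear map is just multiplication of exponents. The following toy sketch is our own illustration with no cryptographic hardness (the modulus and parameter sizes are arbitrary); it checks e(P(sk, x), w) = D(x) and e(P(sk, x), w^c) = F(sk, x):

```python
import random

q = (1 << 61) - 1  # toy prime "group order"; elements are written as exponents

def pair(a: int, b: int) -> int:
    # Toy bilinear map: e(g1^a, g2^b) = gT^(a*b), i.e. multiply exponents mod q.
    return (a * b) % q

n = 8
rnd = random.Random(1)
v, w, c = (rnd.randrange(1, q) for _ in range(3))
d = [(rnd.randrange(1, q), rnd.randrange(1, q)) for _ in range(n)]

def prod_d(hx):
    # ∏_{i=1}^{n} d_{i, h(x)_i} mod q
    t = 1
    for i, bit in enumerate(hx):
        t = t * d[i][bit] % q
    return t

def proof(hx):   # P(sk, x) = v^(∏ d_{i,h(x)_i}), written as an exponent
    return v * prod_d(hx) % q

def value(hx):   # F(sk, x) = e(v^(∏ d), w^c)
    return pair(v * prod_d(hx) % q, w * c % q)

def D(hx):       # the public circuit's output D(x) = e(v^(∏ d), w)
    return pair(v * prod_d(hx) % q, w)

hx = [rnd.randrange(2) for _ in range(n)]
assert pair(proof(hx), w) == D(hx)              # first verification equation
assert pair(proof(hx), w * c % q) == value(hx)  # second verification equation
```

Both checks reduce to associativity of multiplication in the exponent, which is exactly the bilinearity the real pairing provides.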
We solve this problem with a technique introduced by Hohenberger et al. [9]. We divide the domain into two disjoint sets via the admissible hash function: a computable set and a challenge set. The proportion of the computable set in the domain is about 1 − 1/Q(λ), and the proportion of the challenge set is about 1/Q(λ), where Q(λ) is the number of queries made by the adversary. For the evaluation queries made before the adversary outputs the challenge point, we use the secret key sk to answer each query x and abort the experiment if x belongs to the challenge set. After the adversary outputs a challenge point x*, we use a freshly chosen secret key sk′ to answer the evaluation queries. Via a hybrid argument, we reduce the weak Bilinear Diffie-Hellman Inversion (BDHI) assumption to the pseudorandomness of constrained VRFs.


Related Works.
Lysyanskaya [11] gave a construction of VRFs in bilinear groups, but the size of proofs and keys is linear in the input size, which may be undesirable for resource-constrained users. Dodis and Yampolskiy [12] gave a simple and efficient construction of VRFs based on bilinear maps. Their VRFs have constant-size proofs and keys, but the construction is only suitable for small input spaces. Hohenberger and Waters [13] presented the first VRFs for exponentially large input spaces under a noninteractive assumption. Abdalla et al. [14] showed a relation between VRFs and identity-based key encapsulation mechanisms and proposed a new VRF-suitable identity-based key encapsulation mechanism from the decisional ℓ-weak Bilinear Diffie-Hellman Inversion assumption.
Fuchsbauer et al. [15] studied the adaptive security of the GGM construction for constrained PRFs and gave a new reduction that loses only a quasipolynomial factor q^{O(log λ)}, where q is the number of the adversary's queries. Hofheinz et al. [16] gave a new constrained PRF construction for circuits that has a polynomial reduction to indistinguishability obfuscation in the random oracle model.
Kiayias et al. [17] introduced a novel cryptographic primitive called delegatable pseudorandom functions, which enable a proxy to evaluate a pseudorandom function on a strict subset of its domain using a trapdoor derived from the delegatable PRF's secret key. Boyle et al. [18] introduced functional PRFs, which can be seen as constrained PRFs. In functional PRFs, in addition to a master secret key, there are secret keys for a function f, which allow one to evaluate the pseudorandom function on any y for which there exists an x such that f(x) = y. Chandran et al. [19] showed constructions of selectively secure constrained VRFs for the class of all polynomial-sized circuits.

Preliminaries
We first give the definition of admissible hash functions, which were introduced by Boneh and Boyen [20].
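As a preview of how an admissible hash is used later for partitioning, here is a simplified predicate H_u. This is our own sketch: H_u(x) = 0 when the hashed input matches u at every non-⊥ position, and AdmSample is modeled by fixing a few random positions of u.

```python
import random

WILDCARD = None  # stands for the symbol ⊥

def H_u(u, hx):
    # 0 = "challenge" partition: hx agrees with u at every fixed position.
    # 1 = "computable" partition: hx disagrees with u somewhere.
    return 0 if all(ui is WILDCARD or ui == b for ui, b in zip(u, hx)) else 1

n, fixed = 16, 3
rng = random.Random(7)
# Sample u with `fixed` non-⊥ positions, roughly as AdmSample might.
u = [WILDCARD] * n
for i in rng.sample(range(n), fixed):
    u[i] = rng.randrange(2)

# A random input lands in the challenge partition with probability 2^-fixed.
trials = 50_000
zeros = sum(H_u(u, [rng.randrange(2) for _ in range(n)]) == 0
            for _ in range(trials))
```

Tuning the number of fixed positions trades off the chance that all evaluation queries fall in the computable partition against the chance that the challenge falls in the challenge partition, which is the θ(Q) loss in the partitioning lemma.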
Next, we present the formal definition of indistinguishability obfuscation following the syntax of Garg et al. [21].
Definition 2 (indistinguishability obfuscation (iO)). A uniform PPT machine iO is called an indistinguishability obfuscator for a circuit class {C_λ} if the following holds:
(i) Correctness: for all security parameters λ ∈ N, for all C ∈ C_λ, and for all inputs x, we have Pr[C′(x) = C(x) : C′ ⟵ iO(λ, C)] = 1
(ii) Indistinguishability: for any (not necessarily uniform) PPT distinguishers (Samp, D), there exists a negligible function negl such that, whenever Samp(1^λ) outputs two circuits C_0, C_1 ∈ C_λ with C_0(x) = C_1(x) for all inputs x, |Pr[D(iO(λ, C_0)) = 1] − Pr[D(iO(λ, C_1)) = 1]| ≤ negl(λ)

2.1. Assumptions. Let G be a PPT group algorithm that takes a security parameter 1^λ as input and outputs a tuple (N, G_p, G_q, G_1, G_2, G_T, e), in which p and q are independent uniformly random λ-bit primes, G_1, G_2, and G_T are groups of order N = pq, e : G_1 × G_2 ⟶ G_T is a bilinear map, and G_p and G_q are the subgroups of G_1 of order p and q, respectively. The subgroup decision assumption [22] in bilinear groups states that the uniform distribution on G_1 is computationally indistinguishable from the uniform distribution on the subgroup G_p or G_q.

Assumption 1 (subgroup hiding for composite order bilinear groups). Let (N, G_p, G_q, G_1, G_2, G_T, e) ⟵ G(1^λ), and let T_0 ⟵ G_1 and T_1 ⟵ G_p (or G_q). The advantage of an algorithm A is defined as Adv^SGH_A = |Pr[A(N, G_p, G_q, G_1, G_2, G_T, e, T_0) = 1] − Pr[A(N, G_p, G_q, G_1, G_2, G_T, e, T_1) = 1]|. We say that the subgroup decision problem is hard if, for all PPT A, Adv^SGH_A is negligible in λ.
Assumption 2 (weak Bilinear Diffie-Hellman Inversion). Let (N, G_p, G_q, G_1, G_2, G_T, e) ⟵ G(1^λ), g_1 ⟵ G_1, g_2 ⟵ G_2, a ⟵ Z*_N, and c ⟵ Z*_N. Let D = (N, G_p, G_q, G_1, G_2, G_T, e, g_1, g_1^a, . . . , g_1^{a^{n−1}}, g_2, g_2^c), T_0 = e(g_1^{a^n}, g_2^c), and T_1 ⟵ G_T. The advantage of an algorithm A in solving the problem is defined as Adv^BDHI_A = |Pr[A(D, T_0) = 1] − Pr[A(D, T_1) = 1]|. We say that the weak bilinear Diffie-Hellman inversion problem is hard if, for all PPT A, Adv^BDHI_A is negligible in λ. Chase et al. [22] showed that many q-type assumptions are implied by subgroup hiding in bilinear groups of composite order.

Definition
We recall the definition of constrained VRFs which was given by Fuchsbauer [7].
Let F : K × X ⟶ Y be an efficiently computable function, where K is the key space, X is the input domain, and Y is the range. F is said to be a constrained VRF with regard to a set S ⊂ X if there exist a constrained key space K′, a proof space P, and four algorithms (Setup, Constrain, Prove, and Verify):
(i) Setup(1^λ) ⟶ (pk, sk): this algorithm takes the security parameter λ as input and outputs a pair of keys (pk, sk), a description of the key space K, and a constrained key space K′
(ii) Constrain(sk, S) ⟶ sk_S: this algorithm takes the secret key sk and a set S ⊂ X as input and outputs a constrained key sk_S ∈ K′
(iii) Prove(sk_S, x) ⟶ (y, π) or (⊥, ⊥): this algorithm takes the constrained key sk_S and a value x as input and outputs a pair (y, π) ∈ Y × P of a function value and a proof if x ∉ S; otherwise, it outputs (⊥, ⊥)
(iv) Verify(pk, x, y, π) ⟶ {0, 1}: this algorithm takes the public key pk, an input x, a function value y, and a proof π as input and outputs a value in {0, 1}, where "1" indicates that y = F(sk, x)

3.1. Provability. For all λ ∈ N, (pk, sk) ⟵ Setup(1^λ), S ⊂ X, sk_S ⟵ Constrain(sk, S), x ∈ X, and (y, π) ⟵ Prove(sk_S, x), it holds that:
(i) If x ∉ S, then y = F(sk, x) and Verify(pk, x, y, π) = 1
(ii) If x ∈ S, then (y, π) = (⊥, ⊥)
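The four-algorithm interface and the provability property can be mirrored by a toy instantiation. This is our illustration only: HMAC stands in for F and P, the constrained key is a plain closure rather than an obfuscated circuit, and no uniqueness or pseudorandomness guarantees are claimed.

```python
import hmac, hashlib

def setup(lam: int = 128):
    sk = b"\x07" * (lam // 8)
    pk = hashlib.sha256(sk).digest()   # placeholder public key
    return pk, sk

def constrain(sk: bytes, S):
    frozen = frozenset(S)
    def sk_S(x: bytes):
        # Prove(sk_S, x): (⊥, ⊥) on the constrained set, (y, π) elsewhere.
        if x in frozen:
            return None, None
        y = hmac.new(sk, b"F" + x, hashlib.sha256).digest()
        pi = hmac.new(sk, b"P" + x, hashlib.sha256).digest()
        return y, pi
    return sk_S

pk, sk = setup()
sk_S = constrain(sk, {b"a", b"b"})
```

Provability here is immediate: off S the closure returns the same (value, proof) pair that the master key would produce, and on S it returns (⊥, ⊥). A real scheme must additionally hide sk inside the constrained key, which is exactly where obfuscation enters.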

Pseudorandomness.
We consider the following experiment Exp^VRF_A(1^λ, b) for λ ∈ N:
(i) The challenger first chooses b ⟵ {0, 1}, then generates (pk, sk) by running the algorithm Setup(1^λ), and returns pk to the adversary A
(ii) The challenger initializes two sets V := ∅ and E := ∅, where V will contain the points that the adversary A cannot evaluate and E contains the points at which the adversary queries the evaluation oracle
(iii) The adversary A is given access to an evaluation oracle: on a query x, the challenger returns (F(sk, x), P(sk, x)) and sets E := E ∪ {x}

Semiadaptive Security.
We give a new definition of pseudorandomness called semiadaptive security. It allows the adversary to query the evaluation oracle before it outputs a challenge point, while the public key is returned to the adversary only after the adversary commits to a challenge point. In selective security, the adversary must commit to a challenge input at the beginning of the experiment. Therefore, if a scheme satisfies semiadaptive security, it must satisfy selective security. The converse may not be true.

Puncturable Verifiable Random Functions.
Puncturable VRFs are a special class of constrained VRFs in which the constrained set contains only one value, i.e., S = {x*}. The properties of provability, uniqueness, and pseudorandomness are similar to those of constrained VRFs. To avoid repetition, we omit the formal definitions.

Construction
In this section, we give our construction of puncturable VRFs. A puncturable VRF F : K × X ⟶ Y consists of four algorithms (Setup, Puncture, Prove, and Verify). The input domain is X = {0, 1}^ℓ. The key space K and the range Y are defined as part of the setup algorithm.
This algorithm computes an obfuscation of a circuit C_2 which is defined in Figure 2. Note that C_2 has the secret key sk and the punctured value x′ hardwired in it. The punctured key sk_{x′} is a program that takes an ℓ-bit input x. We define (iv) Verify(pk, x, y, π) ⟶ {0, 1}, which outputs 1 if the equations e(π, w) = D(x) and e(π, w^c) = y are satisfied. Therefore, we have Verify(pk, x, y, π) = 1. When x = x′, we get that Prove(sk_{x′}, x′) = (⊥, ⊥). This completes the proof of provability.

Proof of Pseudorandomness.
In this section, we prove that our construction is a secure puncturable VRF as defined in Section 3.

Theorem 1.
Assuming iO is a secure indistinguishability obfuscator and the subgroup hiding assumption for composite-order bilinear groups holds, our construction described above satisfies semiadaptive security as defined in Section 3.
Proof. To prove the theorem, we define a sequence of games, where the first is the original pseudorandomness security game, and show that each pair of adjacent games is computationally indistinguishable for any PPT adversary A. Without loss of generality, we assume that the adversary A makes Q = Q(λ) evaluation queries before outputting the challenge point, where Q(λ) is a polynomial. We present a full description of each game and underline the changes in each game relative to the previous one. Each game is completely characterized by its key generation algorithm and its challenge answer. The differences between these games are summarized in Table 1.
The first game is the original security game for our construction. Here, the challenger first chooses a puncturable VRF key. Then, A makes evaluation queries and finally outputs a challenge point. The challenger responds with either a VRF evaluation or a random value.

Security and Communication Networks
(2) The adversary A makes evaluation queries, which the challenger answers using sk. When A outputs the challenge point x*, the challenger computes sk_{x*}, y_0 = F(sk, x*), and y_1 ⟵ G_T, and returns (pk, sk_{x*}, y_α) to the adversary A.

Game 2.
This game is the same as Game 1 except that a partitioning game is simulated. If an undesirable partition is queried, we abort the game. The partitioning game is defined as follows: the challenger samples a string u ∈ {0, 1, ⊥}^n via the algorithm AdmSample of the admissible hash function and aborts if either there exists an evaluation query x such that H_u(x) = 0 or the challenge query x* satisfies H_u(x*) = 1.
(2) The adversary A makes an evaluation query x. The challenger checks whether H_u(x) = 1. If not, the game aborts. Else, the challenger computes the response as in Game 1. When A outputs the challenge point x*, the challenger checks whether H_u(x*) = 0. If not, the game aborts. Else, the challenger computes sk_{x*} ⟵ iO(C_2), y_0 = F(sk, x*), and y_1 ⟵ G_T, and returns (pk, sk_{x*}, y_α) to the adversary A. (5) The adversary A outputs a bit α′ and wins if α′ = α.

Lemma 1. For any PPT adversary A, if A wins with advantage ϵ in Game 1, then it wins with advantage at least ϵ/θ(Q(λ)) in Game 2.
Proof.
The difference between Game 1 and Game 2 is that we add an abort condition in Game 2. From the θ-admissibility of the hash function h, we get that the game does not abort with probability at least 1/θ(Q(λ)). The two experiments are equal if Game 2 does not abort. Therefore, if A wins with advantage ϵ in Game 1, then it wins with advantage at least ϵ/θ(Q(λ)) in Game 2. □

Game 3.
This game is the same as the previous one except that the public key and the punctured key are obfuscations of two other circuits, defined in Figures 3 and 4, respectively. On inputs x such that H_u(x) = 1, the public key and the punctured key use the same secret key sk as before. However, if H_u(x) = 0, they use a different secret key sk′ which is randomly chosen from the key space. The detailed description is as follows: (1) The challenger runs (N, G_p, G_q, G_1, G_2, G_T, e) ⟵ G(1^λ), chooses v, w, c, and (d_{1,0}, d_{1,1}), . . . , (d_{n,0}, d_{n,1}), and sets sk = (v, w, c, (d_{1,0}, d_{1,1}), . . . , (d_{n,0}, d_{n,1})) and pk = (w, w^c, iO(C_1)). Then, the challenger flips a coin α ⟵ {0, 1} and runs u ⟵ AdmSample(1^λ, Q).
(2) The adversary A makes an evaluation query x. The challenger checks whether H_u(x) = 1. If not, the game aborts. Else, the challenger computes the response.

Table 1: The differences between adjacent games.

Game | Key generation | Challenge answer

The challenger computes y_0 = e( · , w^c) and y_1 ⟵ G_T. Then, it returns (pk′, sk_{x*}, y_α) to the adversary A. This proof is given in Section 4.3.

Game 4.
This game is the same as the previous one except that the way the secret key sk′ is generated is different. We make some elements of the secret key sk′ contain a factor a, for use on inputs x where H_u(x) = 0. The detailed description is as follows: (1) The challenger runs (N, G_p, G_q, G_1, G_2, G_T, e) ⟵ G(1^λ), chooses v, w, c, and (d_{1,0}, d_{1,1}), . . . , (d_{n,0}, d_{n,1}) ∈ Z²_N, and sets sk = (v, w, c, (d_{1,0}, d_{1,1}), . . . , (d_{n,0}, d_{n,1})) and pk = (w, w^c, iO(C_1)).
(2) The adversary A makes an evaluation query x. The challenger checks whether H_u(x) = 1. If not, the game aborts. Else, the challenger computes the response. For the challenge, the challenger computes y_0 = e( · , w^c) and y_1 ⟵ G_T. Then, it returns (pk′, sk_{x*}, y_α) to the adversary A.
Since a ∈ Z_N is invertible with overwhelming probability, e_{i,b} = e′_{i,b} · a is a uniform element of Z_N. Hence, the two experiments are statistically indistinguishable. □

Game 6.
This game is the same as the previous one except that e(v^{a^n}, w^c) is replaced by a random element of G_T. Formally, the challenger chooses a random element T ⟵ G_T and uses y_0 = T^{∏_{j=1}^{n} e′_{j,b*_j}} to replace y_0 = e(v^{a^n · ∏_{j=1}^{n} e′_{j,b*_j}}, w^c).

Lemma 5. If there exists an adversary A that distinguishes Game 5 and Game 6 with advantage ϵ, then there exists an adversary B that breaks Assumption 2 with advantage ϵ.
Proof. We observe that the difference between Game 5 and Game 6 is that the element e(v^{a^n}, w^c) in Game 5 is replaced by a random element in Game 6. B receives an instance (N, G_p, G_q, G_1, G_2, G_T, e, g_1, g_1^a, . . . , g_1^{a^{n−1}}, g_2, g_2^c, T), where T is either equal to e(g_1^{a^n}, g_2^c) or a random element of G_T. Then, B simulates Game 5 except that y_α = T^{∏_{j=1}^{n} e′_{j,b*_j}}. A outputs α′. If α = α′, B outputs 0, which indicates that T = e(g_1^{a^n}, g_2^c); else, B outputs 1, which implies that T is a random element of G_T.
We observe that both y_0 and y_1 are chosen randomly from G_T in Game 6. This completes the proof of Theorem 1. □

Proof of Lemma 2.
The major difference between Game 2 and Game 3 concerns the "challenge partition" inputs x where H_u(x) = 0. Therefore, in order to show that, for any PPT adversary A, the outputs of Game 2 and Game 3 are indistinguishable, we give a sequence of subexperiments Game 2_A to Game 2_F and prove that any PPT attacker's advantage in each game is negligibly close to that in the previous one. We omit the previous experiment Game 2 and describe the intermediate experiments. In the first game, we change the secret key such that the circuit computes the output in a different manner while the output remains the same as in the original circuit. Next, using the weak bilinear Diffie-Hellman inversion assumption, we modify the constants hardwired in the program such that the output on all challenge-partition inputs is changed. Essentially, the two programs use different bases for the challenge partition. Finally, using the subgroup hiding assumption and the Chinese Remainder Theorem, we can change the exponents for the challenge partition and ensure that the original circuit (in Game 2) and the final circuit (in Game 3) use different secret keys for the challenge partition.
(2) The adversary A makes an evaluation query.

Proof. We observe that the difference between Game 2 and Game 2_A is the manner in which the d_{i,b} are chosen. In Game 2, the d_{i,b} are chosen randomly from Z_N, while in Game 2_A, the challenger first chooses d′_{i,b} ⟵ Z_N and a ⟵ Z_N and sets the d_{i,b} from them.

Game 2_B. This game is the same as the previous one except that the values hardwired in the circuit are changed. The domain is divided into two disjoint sets by the admissible hash function. When H_u(x) = 0, all elements d_{i,b} used to compute the function value y contain a factor a; therefore, the related function values can be computed from v′ = v^{a^n}. On the other hand, when H_u(x) = 1, only some of the elements d_{i,b} used to compute the function value y contain the factor a; therefore, the related function values can only be computed from (v, v^a, . . . , v^{a^{n−1}}).

Lemma 8.
If there exists an adversary A that distinguishes Game 2_B and Game 2_C with advantage ϵ, then there exists an adversary B that breaks Assumption 2 with advantage ϵ.
Proof. We observe that the difference between Game 2_B and Game 2_C is that the term v^{a^n} is replaced by a random element of G_1. The proof is similar to that of Lemma 5. □

Game 2 D .
This game is the same as the previous one except that v is chosen randomly from the subgroup G_p and v′ is chosen randomly from the subgroup G_q in Step 1.

Lemma 9.
Assuming Assumption 1 holds, Game 2_C and Game 2_D are computationally indistinguishable.
Proof. We introduce an intermediate experiment Game 2_{C_1} and show that Game 2_{C_1} and Game 2_C are computationally indistinguishable. Similarly, Game 2_{C_1} and Game 2_D are computationally indistinguishable.
Game 2_{C_1} is the same as Game 2_C except that v is chosen from G_p. Suppose that there exists an adversary A which can distinguish Game 2_{C_1} and Game 2_C; we construct an adversary B that breaks Assumption 1. B receives (N, G_p, G_q, G_1, G_2, G_T, e, T), chooses w ⟵ G_2, a, c ∈ Z_N, and (d′_{1,0}, d′_{1,1}), . . . , (d′_{n,0}, d′_{n,1}) ∈ Z²_N, sets v := T, and computes v′, D, and pk as in Game 2_C. Then, B runs the remaining steps as in Game 2_C. At last, A outputs α′; if α = α′, B guesses T ∈ G_1, else B guesses T ∈ G_p. Note that B simulates exactly Game 2_C when T ⟵ G_1 and exactly Game 2_{C_1} when T ⟵ G_p. Therefore, if there exists an adversary A that distinguishes the outputs of Game 2_C and Game 2_{C_1} with advantage ϵ, then there exists an adversary B that breaks Assumption 1.
Game 2_{D_1}. This game is the same as the previous one except that the secret key is divided into two parts, sk and sk′. If H_u(x) = 0, the related function values are computed with sk′; else, they are computed with sk.
The constrained key sk_{x*} is computed as in Game 2_D.
Game 2_{D_2} is the same as Game 2_{D_1}, except that the constrained key sk_{x*} is computed by the circuit C_4. □

Claim 1. Assuming iO is a secure indistinguishability obfuscator, Game 2_{D_1} and Game 2_{D_2} are computationally indistinguishable.
Proof. We construct a PPT adversary B that uses A to break the security of iO. B runs Step 1 and Step 3 as in Game 2_{D_1}. On receiving the challenge point x*, B constructs the two circuits used to compute the constrained key in Game 2_{D_1} and in Game 2_{D_2} (the latter is C_4), sends them to the iO challenger, and receives an obfuscated circuit, which it uses as sk_{x*}. B then completes the simulation for A. When A outputs α′, B outputs 0 if α′ = α, and 1 otherwise.

Next, we show that the two circuits have identical functionality: on every input x, they return the same output, so they are functionally equivalent. Hence, if there exists an adversary that can distinguish the two games, then we can construct an adversary B that breaks the iO security.
B computes v′, D, and pk as in Game 2_C. Since a ⟵ Z_N, a is invertible with overwhelming probability; therefore, the corresponding element is uniformly distributed, and the view of A is identical in the two games. It follows that the two experiments are statistically indistinguishable.
This game is the same as the previous one except that the secret key sk′ is generated in a different way from sk.
(2) The adversary A makes an evaluation query. The challenger computes y_0 = e( · , w^c) and y_1 ⟵ G_T, and returns (pk, sk_{x*}, y_α) to the adversary A.

Proof.
The proof is similar to that of Lemma 9.

Constrained Verifiable Random Function
In this section, we give our construction of a constrained verifiable random function for constrained sets of polynomial size. We embed the puncturable VRF in the constrained VRF. Informally, our algorithm works as follows. The setup algorithm is the same as that of the puncturable VRF. The constrained key sk_S for the subset S is a circuit which has the secret key sk hardwired in it. On input a value x, the circuit computes the function value and proof via the puncturable VRF if x ∉ S. The verification algorithm is the same as that of the puncturable VRF. When proving pseudorandomness, we translate the puncturable VRF into a constrained VRF with polynomial-size constrained sets by means of a hybrid argument. Once the adversary queries the constrained key for the polynomial-size set S_1, the challenger can guess the challenge point x* with probability 1/|S_1|. Subsequently, the secret key sk can be replaced by a punctured key sk_{x*} of the puncturable VRF. Via a hybrid argument, we reduce the pseudorandomness of the puncturable VRF to the pseudorandomness of the constrained VRF. Let F : K × X ⟶ Y be a puncturable VRF (Setup, Puncture, Prove, and Verify), and let P : K × X ⟶ G_1 be the proof-generation function. We construct constrained VRFs (F.Setup, F.Constrain, F.Prove, and F.Verify) by invoking the puncturable VRF: (i) F.Setup(1^λ) ⟶ (pk, sk): run the algorithm (pk_1, sk_1) ⟵ Setup(1^λ). Set pk = pk_1 and sk = sk_1.
(ii) F.Constrain(sk, S) ⟶ sk_S: this algorithm takes the secret key sk and the constrained set S as inputs, where |S| = poly(λ), and computes an obfuscation of a circuit C_{sk,S} defined in Figure 9. C_{sk,S} has the secret key, the function descriptions F and P, and the constrained set S hardwired in it. Set sk_S ⟵ iO(C_{sk,S}), where C_{sk,S} is padded to an appropriate size. (iii) F.Prove(sk_S, x) ⟶ (y, π) or (⊥, ⊥): the constrained key sk_S is a program that takes x as input. Define F.Prove(sk_S, x) = sk_S(x).
(iv) F.Verify(pk, x, y, π) ⟶ {0, 1}: this algorithm is the same as Verify. The provability and uniqueness follow from the puncturable VRFs; we omit the detailed description. Next, we show that this construction satisfies the pseudorandomness defined in Section 3.

Theorem 2.
Assuming iO is a secure indistinguishability obfuscator and (Setup, Puncture, Prove, and Verify) is a secure puncturable VRF, the construction defined above satisfies pseudorandomness.
Proof. Without loss of generality, we assume the adversary makes q_1 evaluation queries and q_2 constrained-key queries. We present a full description of each game and underline the changes relative to the previous one. □

Game 1.
The first game is the original security game for our construction. Here, the challenger first chooses a constrained VRF key pair (pk, sk). Then, A makes evaluation queries and constrained-key queries and outputs a challenge point. The challenger responds with either a VRF evaluation or a random element.
(i) The challenger chooses b ⟵ {0, 1} and then generates (pk, sk) by running the algorithm F.Setup(1^λ)
(ii) The adversary makes evaluation queries or constrained-key queries: (1) if A sends an evaluation query x_i, output (F(sk, x_i), P(sk, x_i)); (2) if A sends a constrained-key query for S_j, output the constrained key sk_{S_j} ⟵ iO(C_{sk,S_j})
(iii) A sends a challenge query x* such that x* ≠ x_i for all i ≤ q_1 and x* ∈ S_j for all j ≤ q_2. Then, the challenger sets y_0 = F(sk, x*) and y_1 ⟵ Y and outputs (y_b, pk)
(iv) A outputs b′ and wins if b = b′

Game 2.
This game is the same as the previous one except that we introduce an abort condition. When the adversary A makes the first constrained-key query S_1, the challenger guesses the challenge point by choosing x′ ∈ S_1. If any of the remaining q_2 − 1 queried sets S_j does not contain x′, the experiment aborts. In addition, the experiment aborts if x′ ≠ x*, where x* is the challenge query.
(ii) The adversary makes evaluation queries or constrained-key queries. For the first constrained-key query S_1, the challenger chooses x′ ⟵ S_1 and outputs sk_{S_1} ⟵ iO(C_{sk,S_1}). For all evaluation queries x_i before the first constrained-key query, the challenger outputs (F(sk, x_i), P(sk, x_i)). For all queries after the first constrained-key query, the challenger does as follows: (1) if A sends an evaluation query x_i such that x_i = x′, the experiment aborts; else, output (F(sk, x_i), P(sk, x_i)); (2) if A sends a constrained-key query for S_j such that x′ ∉ S_j, the experiment aborts; else, output sk_{S_j} ⟵ iO(C_{sk,S_j}).
(iii) A sends a challenge query x * such that x * ≠ x i for all i ≤ q 1 , and x * ∈ S j for all j ≤ q 2 . If x * ≠ x ′ , the experiment aborts. Else, the challenger sets y 0 � F(sk, x * ) and y 1 ⟵ Y and outputs (y b , pk).

Lemma 13. For any PPT adversary A, if A wins with advantage ϵ in Game 1, then it wins with advantage ϵ/|S 1 | in Game 2.
Proof. According to the pseudorandomness defined in Section 3, the challenge point must belong to every constrained set. The two experiments are identical as long as Game 2 does not abort. Since the challenger guesses correctly with probability 1/|S 1 |, if A wins with advantage ϵ in Game 1, then it wins with advantage ϵ/|S 1 | in Game 2. □

5.3. Game 2 i. For 0 ≤ i ≤ q 2, the experiment is the same as the previous one except that the responses to the first i constrained queries use sk x′ instead of sk. We observe that Game 2 0 is equal to Game 2.
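The loss in Lemma 13 can be sanity-checked numerically: the guess x′ is uniform in S 1 and independent of the adversary's view, so the abort is avoided with probability exactly 1/|S 1 |. A toy simulation (not part of the proof) under these assumptions:

```python
import random

def guess_survival_rate(S1_size, trials=200_000, seed=1):
    """Estimate Pr[x' = x*] when x' is uniform in S_1 and x* is a
    fixed point of S_1 chosen independently of x', as in Game 2."""
    rng = random.Random(seed)
    S1 = list(range(S1_size))
    x_star = S1[0]              # adversary's challenge point, fixed in S_1
    hits = sum(rng.choice(S1) == x_star for _ in range(trials))
    return hits / trials

rate = guess_survival_rate(8)
# The rate concentrates around 1/|S_1| = 0.125, so a Game 1 advantage of
# epsilon shrinks to roughly epsilon/|S_1| in Game 2.
```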
(ii) The adversary makes evaluation queries or constrained key queries: For the first constrained query S 1, the challenger chooses x′ ⟵ S 1, computes (sk x′, π′) ⟵ Puncture(sk, x′), and outputs sk S 1 ⟵ iO(C sk x′,S 1), where the description of the circuit C sk x′,S 1 is given in Figure 10. For all evaluation queries x i before the first constrained query, the challenger outputs (F(sk, x i), P(sk, x i)). For all queries after the first constrained query, the challenger does as follows:
(1) If A sends an evaluation query x i such that x i = x′, the experiment aborts. Else, if x i ≠ x′, output (F(sk, x i), P(sk, x i)) = Prove(sk x′, x i).
(2) If A sends a constrained key query for S j such that x′ ∉ S j, the experiment aborts. Else, if j ≤ i, output sk S j ⟵ iO(C sk x′,S j); else, output sk S j ⟵ iO(C sk,S j), where the description of the circuit C sk x′,S j is given in Figure 10.
(iii) A sends a challenge query x* such that x* ≠ x i for all i ≤ q 1 and x* ∈ S j for all j ≤ q 2. If x* ≠ x′, the experiment aborts. Else, the challenger sets y 0 = F(sk, x*) and y 1 ⟵ Y and outputs (y b, pk).
(iv) A outputs b′ and wins if b = b′.

Lemma 14. Assuming iO is a secure indistinguishability obfuscator, Game 2 i−1 and Game 2 i are computationally indistinguishable.
Proof. We observe that Game 2 i−1 and Game 2 i differ only in the response to the i-th constrained query. In Game 2 i−1, sk S i ⟵ iO(C sk,S i), while in Game 2 i, sk S i ⟵ iO(C sk x′,S i).
In order to prove that the two games are indistinguishable, we only need to show that the circuits C sk,S i and C sk x′,S i are functionally equivalent.
(i) If x ∈ S i, both circuits output (⊥, ⊥).
(ii) For any input x ∉ S i, we have x ≠ x′ since x′ ∈ S i, and hence C sk,S i(x) = Prove(sk, x) = Prove(sk x′, x) = C sk x′,S i(x).
Therefore, by the security of iO, the two experiments are indistinguishable. □
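The functional-equivalence claim can be illustrated on a toy domain: the punctured key agrees with the full key everywhere except x′, and since x′ ∈ S i, that one disagreement point is already filtered out by the circuit. The key, F, and the puncturing below are hypothetical stand-ins, not the paper's primitives:

```python
# Toy model: sk is an integer key, and the "punctured" key is a table of
# F(sk, x) for all x != x', so evaluation under sk_{x'} is undefined only at x'.
DOMAIN = range(20)
sk = 42
F = lambda k, x: (k * x + 7) % 101

def puncture(key, x_prime):
    return {x: F(key, x) for x in DOMAIN if x != x_prime}

def C_full(S, x):
    """C_{sk,S}: bot on S, else the real evaluation under sk."""
    return None if x in S else F(sk, x)

def C_punctured(sk_punct, S, x):
    """C_{sk_{x'},S}: same filter, but evaluates with the punctured key."""
    return None if x in S else sk_punct[x]

S_i = {3, 8, 13}          # constrained set; the guess x' must lie in S_i
x_prime = 8
sk_punct = puncture(sk, x_prime)

# Since x' is in S_i, the only point where the keys differ is filtered out,
# so the two circuits agree on the entire domain:
assert all(C_full(S_i, x) == C_punctured(sk_punct, S_i, x) for x in DOMAIN)
```

If x′ were *not* in S i, the assertion would fail at x = x′ (a `KeyError` in this toy model), which is exactly why the abort condition of Game 2 is needed before the iO step.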

Game 3.
This game is the same as Game 2 q 2 except that y 0 is replaced by a random element from Y.
(ii) The adversary makes evaluation queries or constrained key queries: For the first constrained query S 1, the challenger chooses x′ ⟵ S 1, computes sk x′, and outputs sk S 1 ⟵ iO(C sk x′,S 1). For all evaluation queries x i before the first constrained query, the challenger outputs (F(sk, x i), P(sk, x i)). For all queries after the first constrained query, the challenger does as follows:
(1) If A sends an evaluation query x i such that x i = x′, the experiment aborts. Else, if x i ≠ x′, output (F(sk, x i), P(sk, x i)).
(2) If A sends a constrained key query for S j such that x′ ∉ S j, the experiment aborts. Else, output sk S j ⟵ iO(C sk x′,S j).
(iii) A sends a challenge query x* such that x* ≠ x i for all i ≤ q 1 and x* ∈ S j for all j ≤ q 2. If x* ≠ x′, the experiment aborts. Else, the challenger sets y 0 ⟵ Y and y 1 ⟵ Y and outputs (y b, pk).
(iv) A outputs b′ and wins if b = b′.
Circuit C sk,S. Input: a value x ∈ X. Constants: the function descriptions F and P, the secret key sk, the constrained set S ⊂ X. 1. If x ∈ S, output (⊥, ⊥); 2. else, output y = F(sk, x) and π = P(sk, x).

Proof. We prove that if there exists an adversary A that distinguishes Game 2 q 2 and Game 3, then there exists another adversary B that breaks the security of puncturable VRFs.
B can simulate a perfect experiment for A. For each evaluation query x before the first constrained key query, B sends x to the puncturable VRF challenger and returns (y, π) to A. When A queries the constrained key for S 1, B chooses x′ ∈ S 1, sends x′ to the challenger, and receives (sk x′, pk, y).
Then, B uses sk x′ to respond to the remaining queries. On receiving the challenge input x*, B checks that x′ = x* and outputs y. B outputs the response of A. We observe that if y is chosen randomly, then B simulates Game 3; else, it simulates Game 2 q 2. Therefore, Game 2 q 2 and Game 3 are computationally indistinguishable. □
We observe that both y 0 and y 1 are chosen randomly from Y. Therefore, for any PPT adversary A, it has negligible advantage in Game 3.
This completes the proof of Theorem 2.

Conclusion
In this work, we construct a novel constrained VRF for polynomial-size sets and prove its security under a new security definition called semiadaptive security. Moreover, our construction is based on bilinear maps, avoiding the use of multilinear maps. Although it does not achieve full adaptive security, semiadaptive security improves on selective security by allowing the adversary to query the evaluation oracle before outputting the challenge point. Constructing fully adaptively secure constrained VRFs is left as future work.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The authors declare that they have no conflicts of interest.