Recent Trends in Special Numbers and Special Functions and Polynomials

1 Department of Economics, Faculty of Economics, Administrative and Social Sciences, Hasan Kalyoncu University, 27410 Gaziantep, Turkey
2 Department of Mathematics, Faculty of Arts and Sciences, University of Gaziantep, 27310 Gaziantep, Turkey
3 Department of Mathematics, Dokuz Eylül University, 35160 Izmir, Turkey
4 Department of Mathematics and Statistics, University of Victoria, Victoria, BC, Canada V8W 3R4
5 Department of Mathematics, Howard University, 2441 6th Street NW, Washington, DC 20059, USA


Introduction
In 1982, Koutras [1] introduced the noncentral Stirling numbers of the first and second kind as a natural extension of the classical Stirling numbers, namely, as the coefficients that express the factorial (t)_n in terms of powers of (t − a) and vice versa. These numbers, denoted by s_a(n, k) and S_a(n, k) respectively, are defined by means of the inverse relations (t)_n = Σ_{k=0}^n s_a(n, k)(t − a)^k and (t − a)^n = Σ_{k=0}^n S_a(n, k)(t)_k, where a and t are any real numbers, n is a nonnegative integer, and (t)_n = t(t − 1)⋯(t − n + 1). The numbers satisfy the recurrence relations s_a(n + 1, k) = s_a(n, k − 1) + (a − n) s_a(n, k) and S_a(n + 1, k) = S_a(n, k − 1) + (k − a) S_a(n, k), with initial conditions s_a(0, 0) = S_a(0, 0) = 1. It is worth mentioning that, for a given negative binomial distribution and the sum Y = X_1 + X_2 + ⋯ + X_N of independent random variables following the logarithmic distribution, the numbers s_a(n, k) appeared in the distribution of the sum W = X + Y, while the numbers S_a(n, k) appeared in the distribution of the sum Ŵ = X̂ + Ŷ, where X̂ is the sum of independent random variables following the Poisson distribution truncated away from zero and Ŷ is a Poisson random variable. The probability distributions of W and Ŵ are given explicitly in [1], where a more detailed discussion of noncentral Stirling numbers can also be found. Determining the location of the maximum of Stirling numbers is an interesting problem to consider. In [2], Mező obtained results for the so-called r-Stirling numbers, which are natural generalizations of Stirling numbers. He showed that the sequences of r-Stirling numbers of the first and second kinds are strictly log-concave.
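The recurrences for the two kinds of noncentral Stirling numbers are easy to check numerically. The following is a minimal sketch, assuming the standard notation s_a(n, k) and S_a(n, k) from Koutras's paper; it verifies that a = 0 recovers the classical Stirling numbers and that the first-kind numbers satisfy the defining expansion of the falling factorial.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def s1(n, k, a):
    """First kind: s_a(n+1, k) = s_a(n, k-1) + (a-n) * s_a(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return s1(n - 1, k - 1, a) + (a - (n - 1)) * s1(n - 1, k, a)

@lru_cache(maxsize=None)
def s2(n, k, a):
    """Second kind: S_a(n+1, k) = S_a(n, k-1) + (k-a) * S_a(n, k)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k > n:
        return 0
    return s2(n - 1, k - 1, a) + (k - a) * s2(n - 1, k, a)

# a = 0 recovers the classical Stirling numbers:
assert s1(3, 1, 0) == 2      # signed first kind s(3, 1)
assert s2(4, 2, 0) == 7      # second kind S(4, 2)

# Sanity check of the inverse relation (t)_n = sum_k s_a(n,k) (t-a)^k:
t, a, n = 5, 2, 3
falling = t * (t - 1) * (t - 2)
assert falling == sum(s1(n, k, a) * (t - a)**k for k in range(n + 1))
```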
Using a theorem of Erdős and Stone [3], he was able to establish an approximation for the largest index at which the sequence of r-Stirling numbers of the first kind assumes its maximum. Following the methods of Mező, we establish strict log-concavity, and hence unimodality, of the sequence of noncentral Stirling numbers of the first kind and, eventually, obtain an estimate for the index at which the maximum element of the sequence occurs.

Explicit Formula
In this section, we establish an explicit formula in symmetric function form which is necessary in locating the maximum of noncentral Stirling numbers of the first kind.
Then, for n ≥ 3, using (9), we obtain the following lemma.
The explicit formula in Theorem 3 is needed to locate the peak of the distribution of noncentral Stirling numbers of the first kind. Moreover, this explicit formula can also be used to give a combinatorial interpretation of s_a(n, k).
A 0-1 tableau, as defined by de Médicis and Leroux [4], is a pair φ = (λ, f), where λ is a partition of an integer m and f is a filling of the cells of the corresponding Ferrers diagram of shape λ with 0's and 1's, such that there is exactly one 1 in each column. Using the partition λ = (5, 3, 3, 2, 1), we can construct 60 distinct 0-1 tableaux; one of them is shown in the figure (omitted here). Also, as defined in [4], an A-tableau is a list of columns c of a Ferrers diagram of a partition (by decreasing order of length) such that the lengths |c| are part of the sequence A = (a_i)_{i≥0} of nonnegative integers. If T_A(h, r) denotes the set of A-tableaux with exactly r distinct columns whose lengths lie in the set A = {a_0, a_1, . . . , a_h}, then |T_A(h, r)| = C(h + 1, r). Now, transforming each column c of an A-tableau in T_A(j − 1, n − k) into a column of length ω(|c|), we obtain a new tableau, called an A_ω-tableau; if ω(|c|) = |c|, then the A_ω-tableau is simply the A-tableau. We define an A_ω(0, 1)-tableau to be a 0-1 tableau constructed by filling up the cells of an A_ω-tableau with 0's and 1's such that there is only one 1 in each column, and we denote the set of such tableaux by T_{A_ω}(j − 1, n − k). It can easily be seen that every (n − k)-combination of A gives rise to such a tableau; thus, using (29), we can easily prove the following theorem.
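The count of 60 tableaux for λ = (5, 3, 3, 2, 1) follows because the fillings of the columns are independent: a column of height h admits h placements of its single 1, so the total is the product of the column heights (the parts of the conjugate partition). A quick check:

```python
from math import prod

def count_01_tableaux(shape):
    """Count 0-1 fillings of the Ferrers diagram of `shape` with exactly
    one 1 per column: the product of the column heights (i.e., the
    parts of the conjugate partition)."""
    heights = [sum(1 for part in shape if part >= j)
               for j in range(1, max(shape) + 1)]
    return prod(heights)

# The partition (5, 3, 3, 2, 1) from the text yields 60 distinct tableaux:
assert count_01_tableaux([5, 3, 3, 2, 1]) == 60
```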
If a = a_1 + a_2 for some a_1 and a_2, then, with ω*(c) = a_2 − |c|, suppose B is the set of all A_ω-tableaux corresponding to φ such that each tableau in B either has no column whose weight is a_1, or has one column whose weight is a_1, and so on.
Using the same argument as above, we can easily obtain the following convolution formula.

The Maximum of Noncentral Stirling Numbers of the First Kind
We are now ready to locate the maximum of |s_a(n, k)|. First, let us consider the following theorem on Newton's inequality [5], which is a useful tool for proving the log-concavity or unimodality of combinatorial sequences.
Theorem 10 (see [3]). Let a_1 < a_2 < ⋯ be an infinite sequence of positive real numbers such that the sum of their reciprocals diverges. Denote by σ_{n,k} the sum of the products of the first n of them taken k at a time, and denote by K_n the largest value of k for which σ_{n,k} assumes its maximum value. Then K_n can be estimated as in [3]. We also need to recall the asymptotic expansion of the harmonic numbers, H_n = ln n + γ + 1/(2n) − 1/(12n²) + ⋯, where γ is the Euler-Mascheroni constant.
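The harmonic-number expansion converges quickly, which is what makes it usable in the index estimates below. A quick numerical check of H_n = ln n + γ + 1/(2n) − 1/(12n²) + ⋯:

```python
import math

def harmonic(n):
    """Exact (floating-point) partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

n = 1000
approx = math.log(n) + EULER_GAMMA + 1 / (2 * n) - 1 / (12 * n**2)
# The neglected terms are O(1/n^4), far below this tolerance:
assert abs(harmonic(n) - approx) < 1e-10
```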
The following theorem gives a formula that determines the index corresponding to the maximum of the sequence {|s_a(n, k)|}, 0 ≤ k ≤ n.

Theorem 11. The largest index for which the sequence {|s_a(n, k)|}, 0 ≤ k ≤ n, assumes its maximum is given by an integer-part approximation analogous to Mező's formula, where [x] denotes the integer part of x and the parameter a is negative (a = −b with b > 0).
The signless Stirling numbers of the first kind [6] are the sums of all products of n − k different integers taken from {1, 2, 3, . . . , n − 1}. Using (53), the estimate specializes accordingly to the classical case. Recently, Cakić et al. [7] established explicit formulas for multiparameter noncentral Stirling numbers which are expressible in symmetric function form. One may then try to investigate the location of the maximum value of these numbers using the Erdős-Stone theorem.
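The symmetric-function description can be computed literally, which is useful for checking estimates on small cases:

```python
from itertools import combinations
from math import prod

def signless_stirling1(n, k):
    """|s(n, k)| as the sum of products of n - k distinct integers
    taken from {1, 2, ..., n - 1} (an elementary symmetric function)."""
    return sum(prod(c) for c in combinations(range(1, n), n - k))

assert signless_stirling1(4, 2) == 11   # 1*2 + 1*3 + 2*3
assert signless_stirling1(5, 5) == 1    # empty product
```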

Introduction
Elementary function libraries are often called from performance-critical code sections and hence contribute greatly to the efficiency of numerical applications; together with linear algebra libraries, they largely determine the performance of important applications. Current hardware trends impact this performance as follows: (i) longer pipelines and wider superscalar dispatch favour implementations which distribute computation across different execution units and present the compiler with more opportunities for parallel execution, but they make branches more expensive; (ii) Single-Instruction-Multiple-Data (SIMD) parallelism makes handling cases via branches very expensive; (iii) memory throughput and latency, which are not advancing as fast as computational throughput, hinder the use of lookup tables; (iv) power constraints limit performance more than area does.
The last point is interesting and gives rise to the notion of "dark silicon," in which circuits are designed to be un- or underused to save power. The consequences of these thermal limitations versus silicon usage have been analyzed [1], and a number of performance-stretching approaches have been proposed [2], including the integration of specialized coprocessors.
Our proposal is less radical: instead of adding specialized coprocessors, add novel fully pipelined instructions to existing CPUs and GPUs, use the existing register file, reuse existing silicon for expensive operations (for example, fused multiply-add operations), eliminate costly branches, and add embedded lookup tables, which are a very effective use of dark silicon. In the present paper, we demonstrate this approach for elementary function evaluation, that is, for libm functions and especially vector versions of them.
To optimize performance, our approach takes the successful accurate-table approach of Gal et al. [3, 4], coupled with algorithmic optimizations [5, 6] and branch and table unifications [7], to reduce all fixed-power-, logarithm-, and exponential-family functions to a hardware-based lookup followed by a handful of floating-point operations, mostly fused multiply-add instructions evaluating a single polynomial. Other functions like pow require multiple such basic stages, but no functions require branches to handle exceptional cases, including subnormal and infinite values.
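The lookup-plus-polynomial structure can be sketched in a few lines. The table size, the centre-point table construction, and the short Taylor (rather than minimax) polynomial below are illustrative assumptions of ours, not the paper's parameters; the point is only that one reciprocal lookup, one log lookup, and a small polynomial already give near-double-precision log2 for normal inputs.

```python
import math

TABLE_BITS = 7   # illustrative table of 2**7 entries

def log2_table_based(x):
    """Sketch of a branch-free table-plus-polynomial log2 for normal
    positive inputs: one 'lookup' of a reciprocal, one 'lookup' of a
    log value, then a short polynomial on the reduced argument."""
    m, e = math.frexp(x)            # x = m * 2**e with m in [0.5, 1)
    m *= 2.0
    e -= 1                          # renormalize so m is in [1, 2)
    i = int((m - 1.0) * (1 << TABLE_BITS))
    c = 1.0 + (i + 0.5) / (1 << TABLE_BITS)   # table interval centre
    r = 1.0 / c                     # first lookup: approximate 1/m
    log_c = math.log2(c)            # second lookup: log2 of the centre
    u = m * r - 1.0                 # reduced argument, |u| <= ~2**-8
    # Short Taylor polynomial for log2(1 + u); a real design would use
    # a minimax polynomial instead.
    p = u * (1 - u/2 + u*u/3 - u**3/4) / math.log(2.0)
    return e + log_c + p

assert abs(log2_table_based(10.0) - math.log2(10.0)) < 1e-11
assert abs(log2_table_based(0.3) - math.log2(0.3)) < 1e-11
```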
Although fixed powers (including square roots and reciprocals) of most finite inputs can be efficiently computed using Newton-Raphson iteration following a software or hardware estimate [8], such iterations necessarily introduce NaN intermediate values, which can only be corrected using additional instructions (branches, predication, or selection). Therefore, our proposed implementations avoid iterative methods.
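The NaN problem is easy to reproduce with ordinary floats; the refinement step y' = y(2 − xy) below is the standard Newton-Raphson reciprocal iteration:

```python
import math

# Refining y ~ 1/x with a Newton-Raphson step turns the *correct*
# estimate inf (for x = 0) into NaN, because 0 * inf is NaN.
x = 0.0
y = float('inf')                  # a perfectly good "estimate" of 1/0
y_refined = y * (2.0 - x * y)     # 0 * inf -> NaN, which propagates
assert math.isnan(y_refined)

# A finite input refines as expected:
x = 3.0
y = 0.33                          # low-precision estimate of 1/3
y_refined = y * (2.0 - x * y)
assert abs(y_refined - 1/3) < abs(y - 1/3)
```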
For evaluation of the approach, the proposed instructions were implemented in a Cell/B.E. [9] SPU simulator, and algorithms for a standard math function library were developed to leverage these proposed additions. Overall, we found that the new instructions would result in an average 2.5 times throughput improvement on this architecture versus currently published performance results (Mathematical Acceleration Subsystem, 5.0, IBM). Given the simple data-dependency graphs involved, we expect similar improvements from implementing these instructions on all two-way SIMD architectures supporting fused multiply-add instructions. Higher-way SIMD architectures would likely benefit more.
In the following, the main approach is developed, and the construction of two representative functions, log and log(x + 1), is given in detail, providing insight by example into the nature of the implementation. In some sense these represent the hardest case; although the trigonometric functions require multiple tables and there is some computation of the lookup keys, the hardware instructions themselves are simpler. For a complete specification of the algorithms used, see [10].

New Instructions
Driven by hardware floating-point instructions, the advent of software pipelining and the shortening of pipeline stages favoured iterative algorithms (see, e.g., [11]); the long-running trend towards parallelism has engendered a search for shared execution units [12] and, more generally, a focus on throughput rather than low latency. In terms of algorithmic development for elementary functions, this makes it attractive to combine short-latency seed or table-value lookups with standard floating-point operations, exposing the whole computation to software pipelining by the scheduler.
In proposing Instruction Set Architecture (ISA) extensions, one must consider four constraints: (i) the limit on the number of instructions imposed by the size of the machine word, and the desire for fast (i.e., simple) instruction decoding, (ii) the limit on arguments and results imposed by the architected number of ports on the register file, (iii) the limit on total latency required to prevent an increase in maximum pipeline depth, (iv) the need to balance increased functionality with increased area and power usage. As new lithography methods cause processor sizes to shrink, the relative cost of increasing core area for new instructions is reduced. The net cost may even be negative if the new instructions can reduce code and data size, thereby reducing pressure on the memory interface (which is more difficult to scale). To achieve a performance benefit, ISA extensions should do one or more of the following: (i) reduce the number of machine instructions in compiled code, (ii) move computation away from bottleneck execution units or dispatch queues, or (iii) reduce register pressure.
Considering the above limitations and ideals, we propose adding two instructions, the motivation for which follows below: d = fmaX a b c, an extended-range floating-point multiply-add in which the first argument has 12 exponent bits and 51 mantissa bits and exception handling is nonstandard; and t1 = lookup a b f t, an enhanced table lookup based on one or two arguments, with an immediate argument specifying the function number and the sequence position of the lookup, for example, whether it is the first lookup used for range reduction or the second lookup used for function reconstruction.
It is easiest to see them used in an example. Figure 1 describes the data flow graph (omitting register constants), which is identical for a majority of the elementary functions. The correct lookup is specified as an immediate argument to lookup, the final operation being fma for the log functions and fm otherwise. All of the floating-point instructions also take constant arguments which are not shown. For example, the fmaX takes an argument which is −1.
The dotted box in Figure 1 represents a varying number of fused multiply-adds used to evaluate a polynomial after the multiplicative range reduction performed by the fmaX. The most common size for these polynomials is order three, so the result of the polynomial (the left branch) is available four floating-point operations (typically about 24-28 cycles) later than the 1/x result. The second lookup instruction performs a second table access; for example, for log it looks up log2 of the table value and substitutes exceptional results (±∞, NaN) when necessary. The final fma or fm instruction combines the polynomial approximation on the reduced interval with the table value.
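To make the latency claim concrete, an order-three polynomial evaluated by Horner's rule is a chain of three dependent multiply-adds, each of which maps onto one fused multiply-add; the fourth operation is the final fma/fm that folds in the table value. A minimal sketch:

```python
def poly3_horner(u, c0, c1, c2, c3):
    """Order-3 polynomial via Horner's rule: three dependent
    multiply-add steps, each mapping to one fused multiply-add."""
    r = c3
    r = r * u + c2   # fma 1
    r = r * u + c1   # fma 2
    r = r * u + c0   # fma 3
    return r

# 1 + 2 + 4 + 8 with u = 2 and all coefficients 1:
assert poly3_horner(2.0, 1.0, 1.0, 1.0, 1.0) == 15.0
```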
The gray lines in Figure 1 indicate two possible data flows for three possible implementations: (i) the second lookup instruction is a second lookup, using the same input; (ii) the second lookup instruction retrieves a value saved by the first lookup (in final or intermediate form) from a FIFO queue; (iii) the second lookup instruction retrieves a value saved in a slot according to an immediate tag which is also present in the corresponding first lookup.
In the first case, the dependency is direct. In the latter two cases, the dependency is indirect, via registers internal to the execution unit handling the lookups. All instruction variations have two register inputs and one or no outputs, so they are compatible with existing in-flight instruction and register tracking. On lean in-order architectures, the variants with indirect dependencies, (ii) and (iii), reduce register pressure and simplify modulo loop scheduling. This would be most effective in dedicated computational cores like the SPUs, in which preemptive context switching is restricted.
The variant (iii) requires additional instruction decode logic but may be preferred over (ii) because tags allow lookup instructions to execute in different orders, and for wide superscalar processors, the tags can be used by the unit assignment logic to ensure that matching lookup instructions are routed to the same units. On Very Long Instruction Word machines, the position of lookups could replace or augment the tag.
In low-power environments, the known long minimum latency between the lookups would enable hardware designers to use lower power but longer latency implementations of most of the second lookup instructions.
To facilitate scheduling, it is recommended that the FIFO or tag set be sized to a power of two greater than or equal to the latency of a floating-point operation. In this case, the number of registers required will be less than twice the unrolling factor, which is much lower than what is possible for code generated without access to such instructions. The combination of small instruction counts and reduced register pressure eliminates the obstacles to inlining these functions.
We recommend that lookups be handled either by a load/store unit or, for vector implementations with a complex integer unit, by that unit. This code will be limited by floating-point instruction dispatch, so moving computation out of this unit will increase performance.

Exceptional Values.
A key advantage of the proposed new instructions is that the complications associated with exceptional values (0, ∞, NaN, and values with over- or underflow at intermediate stages) are internal to the instructions, eliminating branches and predicated execution.
Iterative methods with table-based seed values cannot achieve this in most cases because (1) in the 0 and ±∞ cases the iteration multiplies 0 by ∞, producing a NaN, and (2) to prevent over/underflow for high and low input exponents, matched adjustments are required before and after the polynomial evaluation or iterations.
By using the table-based instruction twice, once to look up the value used in range reduction and once to look up the value of the function corresponding to the reduction, and introducing an extended-range floating-point representation with special handling for exceptions, we can handle both types of exceptions without extra instructions.
In the case of finite inputs, the value of the form 2^{−e}/m returned by the first lookup is a normal extended-range value. In the case of subnormal inputs, extended-range values are required to represent this lookup value, because a normal IEEE value would saturate to ∞. Large inputs which produce IEEE subnormals as their approximate reciprocals can be handled (similarly to normal inputs) using the extended-range representation. The extended-range exponent is biased by +2047; the top binary value (4095) is reserved for ±∞ and NaNs, and 0 is reserved for ±0, similar to IEEE floating point. When these values are supplied as the first argument of fmaX, they override the normal values, and fmaX simply returns the corresponding IEEE bit pattern. The second lookup instruction returns an IEEE double except when used for divide, in which case it also returns an extended-range result.
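The extended-range arithmetic implied by the quoted constants can be sketched as follows. Only the 12-bit exponent, 51-bit mantissa, bias of 2047, and the reserved codes 0 and 4095 come from the text; sign handling and NaN payloads are omitted here as our own simplification.

```python
import math

BIAS = 2047        # extended-range exponent bias from the text
EXP_TOP = 4095     # reserved for +/-inf and NaN
MANT_BITS = 51

def encode_ext(value):
    """Pack a positive value into the 12-bit-exponent, 51-bit-mantissa
    extended-range format (sketch: signs and NaN payloads ignored)."""
    if value == 0.0:
        return 0
    if math.isinf(value) or math.isnan(value):
        return EXP_TOP << MANT_BITS
    m, e = math.frexp(value)                 # value = m * 2**e, m in [0.5, 1)
    frac = int((2.0 * m - 1.0) * (1 << MANT_BITS))  # drop the implicit 1
    return ((e - 1 + BIAS) << MANT_BITS) | frac

def decode_ext(bits):
    exp = bits >> MANT_BITS
    frac = bits & ((1 << MANT_BITS) - 1)
    if exp == 0:
        return 0.0
    if exp == EXP_TOP:
        return float('inf')
    return (1.0 + frac / (1 << MANT_BITS)) * 2.0 ** (exp - BIAS)

assert decode_ext(encode_ext(1.5)) == 1.5
# The smallest IEEE subnormal still has a *normal* extended-range encoding:
tiny = 2.0 ** -1074
assert decode_ext(encode_ext(tiny)) == tiny
```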
In Table 2, we summarize how each case is handled; the details are described in the following section. Each cell in Table 2 contains the value used in the reduction, followed by the corresponding function value. The first is given as an extended-range floating-point number, which trades one bit of stored precision in the mantissa for a doubling of the exponent range. In all cases arising in this library, the extra bit would be one of several zero bits, so no actual loss of precision occurs. For the purposes of elementary function evaluation, subnormal extended-range floating-point numbers are not needed, so they do not need to be supported in the floating-point execution unit. As a result, the modifications needed to support extended-range numbers as inputs are minor.
Take, for example, the first cell in the table: recip computing 1/x for a normal positive input. Although the abstract values returned by the two lookups coincide, the bit patterns are different, meaning that 1/x must be representable in both formats. In the next cell, however, for some subnormal inputs, the reduction value is representable in the extended range but not in IEEE floating point, because the addition of subnormal numbers makes the exponent range asymmetrical; as a result, the second value may be saturated to ∞. The remaining cells in this row show that, for ±∞ inputs, we return 0 from both lookups, but for ±0 inputs the first lookup returns 0 and the second lookup returns ±∞. The last column shows that, for negative inputs, the returned values change sign. This ensures that intermediate values are always positive and allows the polynomial approximation to be optimized to give correctly rounded results on more boundary cases. Both lookups return quiet NaN outputs for NaN inputs.

International Journal of Mathematics and Mathematical Sciences
Contrast this with the handling of approximate-reciprocal instructions. For such instructions to be useful as approximations, 0 inputs should return ∞ approximations and vice versa; but if the approximation is improved using Newton-Raphson, then the multiplication of the input by the approximation produces a NaN which propagates to the final result.
The other cases are similar in treating 0 and ∞ inputs specially. Noteworthy variations are that log2 multiplicatively shifts subnormal inputs into the normal range so that the normal approximation can be used and then additively shifts the result of the second lookup to compensate, while exp2 returns 0 and 1 from the two lookups for subnormal inputs, because the polynomial approximation produces the correct result for the whole subnormal range.
In Table 2, we list the handling of exceptional cases. All exceptional values detected in the first argument are converted to the IEEE equivalent and are returned as the output of the fmaX, as indicated by the subscript f (for final). The subscripted NaNs are special bit patterns required to produce the special outputs needed for exceptional cases; for example, when fmaX is executed with NaN_1 as the first argument (one of the multiplicands) and the other two arguments are finite IEEE values, the result is the second argument b (in IEEE floating-point format). If the result of the multiplication is ∞ and the addend is ∞ of the opposite sign, then the result is zero, although normally it would be a NaN. If the addend is a NaN, then the result is zero. For the other values, indicated by "·" in Table 2, fmaX operates as a usual fused multiply-accumulate, except that the first argument (a multiplicand) is an extended-range floating-point number; for example, the fused multiplication and addition of finite arguments saturates to ±∞ in the usual way.
Finally, for exponential functions, which return fixed finite values for a wide range of inputs (including infinities), it is necessary to override the range reduction so that it produces an output which results in a constant value after the polynomial approximation. In the case of exponential, any finite value which results in a nonzero polynomial value will do, because the second lookup instruction returns 0 or ∞ and multiplication by any finite value will return 0 as required.
Lookup Internals. The lookup instructions perform similar operations for each of the elementary functions we have considered. The function number is an immediate argument. In assembly language, each function could be a different macro, while in high-level languages the pair could be represented by a single function returning a tuple or struct.
A simplified data flow for the most complicated case, log2, is represented in Figure 2. The simplification is the elimination of the many single-bit operations necessary to keep track of exceptional conditions, while the operations that substitute special values are still shown. Critically, the diagram demonstrates that the operations around the core lookups are all of low complexity. The graph is explained in the following, where letter labels correspond to the blue labels in Figure 2. This representation is for variant (ii) or (iii) of the second lookup and includes a dotted line on the centre-right of the figure at (a), delineating a possible set of values to save at the end of the first lookup, where the part of the data flow below the line is computed in the second lookup instruction.
Starting from the top of the graph, the input (b) is used to generate two values, (c) and (d): the multiplicative reduction value and the corresponding log2 contribution in the case of log2. The heart of the operation is two lookup operations, (e) and (f), with a common index. In implementation (i) the lookups would be implemented separately, while in the shared implementations (ii) and (iii) the lookups could be implemented more efficiently together.
Partial decoding of subnormal inputs (g) is required for all of the functions except the exponential functions. Only the leading nonzero bits are needed for subnormal values, and only the leading bits for normal values, but the number of leading zero bits (h) is required to properly form the exponent for the multiplicative reduction. The only switch (i) needed for the first lookup output switches between the reciprocal exponents valid in the normal and subnormal cases, respectively. Accurate range reduction for subnormal inputs requires both extreme end points, for example, 1/2 and 1, because these values are exactly representable. As a result, two exponent values are required, and we accommodate this by storing an exponent bit (j) in addition to the 51 mantissa bits.
On the right-hand side, the lookup (e) for the second lookup operation also retrieves a 4-bit rotation, which also serves as a flag. We need 4 bits because the table size 2^12 implies that the exponent of the leading nonzero bit of nonzero table values may vary by up to 11. This allows us to encode in 30 bits the floating mantissa used to construct the second lookup output. A table value of 0, which always occurs, is encoded as a 12 in the bitRot field. In all other cases, the next operation concatenates the implied 1 of this floating-point format, giving an effective 31 bits of significance (l), which is then rotated into the correct position in a 42-bit fixed-point number. Only the high-order bit overlaps the integer part of the answer generated from the exponent bits, so this value needs to be padded. Because the output is an IEEE float, the contribution of the (padded) value to the mantissa of the output will depend on the sign of the integer exponent part. This sign is computed by adding 1 (m) to the biased exponent, in which case the high-order bit is 1 if and only if the exponent is positive. This bit (n) is used to control the sign reversal of the integer part (o) and of the fractional part, the latter being optimized by padding (p) after XORing (q) but before the +1 (r) required to negate a two's-complement integer.
The integer part has now been computed for normal inputs, but we need to switch in (s) the value for subnormal inputs, which we obtain by biasing the number of leading zeros computed as part of the first step. The apparent 75-bit add (t) is really only 11 bits, with 10 of the bits coming from padding on one side. This fixed-point number may contain leading zeros, but the maximum number of them is bounded by log2 of the ratio of the maximum integer part to the smallest nonzero fraction; the recommended lookup/retrieve boundary shown in Figure 2 requires 46-bit storage. The final switches (v) substitute special values for ±∞ (the mantissa is automatically zero) and a quiet NaN. If variants (ii) and (iii) are implemented, either the hidden registers must be saved on context/core switches, or such switches must be disabled during execution of these instructions, or execution of these instructions must be limited to one thread at a time.

Evaluation
Two types of simulations of these instructions were carried out. First, to test accuracy, our existing Cell/B.E. functional interpreter (see [13]) was extended to include the new instructions. Second, we simulated the log lookups and fmaX using logical operations on bits (that is, a hardware simulation without timing) and verified that the results match the interpreter.
Performance. Since the dependency graphs (as typified by Figure 1) are close to linear and therefore easy to schedule optimally, the throughput and latency of software-pipelined loops will be essentially proportional to the number of floating-point instructions. Table 1 lists the expected throughput for vector math functions with and without the new instructions, and Figure 3 shows the relative measured performance improvements. Overall, the addition of hardware instructions to the SPU results in a mean 2.5× throughput improvement for the whole function library. Performance improvements on other architectures will vary but should be similar, since the acceleration is primarily the result of eliminating instructions for handling exceptional cases.
Accuracy. We tested each of the functions by simulating execution for 20000 pseudorandom inputs over their natural ranges, or over [−1000π, 1000π] for trigonometric functions, and comparing each value to a 500-digit-precision Maple [14] reference. Table 1 shows a maximum error of 2.051 ulp, with many functions close to correctly rounded. This is well within the OpenCL specifications [15] and shows very good accuracy for functions designed primarily for high throughput and small code size. For applications requiring even higher accuracy, larger tables could be used, and polynomials with better rounding properties could be found using the lattice-basis-reduction method of [16].

Conclusion
We have demonstrated considerable performance improvements for fixed-power, exponential, and logarithm calculations by using novel table lookup and fused multiply-add instructions in simple, branch-free, accurate-table-based algorithms. Performance improved less for trigonometric functions, but this improvement will grow with more cores and/or wider SIMD. These calculations ignored the effect of reduced power consumption from reduced instruction dispatch, function calls, branching, and memory accesses for large tables, which means that these algorithms will continue to scale longer than conventional ones. For the target applications, just three added opcodes pack a lot of performance improvement, but designing the instructions required insight into the algorithms, and even a new algorithm [7] for the calculation of these functions. We invite experts in areas such as cryptography and data compression to try a similar approach.

Table 2: (a) Values returned by the lookup instructions for IEEE floating-point inputs, where rnd(·) rounds to the nearest integer. In the case of exp2, inputs < −1074 are treated as −∞ and inputs > 1024 are treated as ∞; for inputs < −1022, we create subnormal numbers for the second lookup. (b, c) Treatment of exceptional values by fmaX follows from that of addition and multiplication. The first argument is given by the row and the second by the column; conventional treatment is indicated by a "·" and unusual handling by specific constant values.
The main purpose of this paper is to obtain explicit formulas for the Meixner polynomials of the first and second kinds.
In this paper we use a method based on a notion of composita, which was presented in [18].
Given a generating function G(x) = Σ_{n>0} g(n)x^n, the coefficients G^Δ(n, k) in the expansion [G(x)]^k = Σ_{n≥k} G^Δ(n, k) x^n form the composita of G [19], denoted by G^Δ(n, k). Below we show some required rules and operations with compositae.
In this paper we consider an application of this mathematical tool for the Bessel polynomials and the Meixner polynomials of the first and second kinds.

Bessel Polynomials
Krall and Frink [20] considered a new class of polynomials. Since these polynomials are connected with the Bessel function, they called them the Bessel polynomials. The explicit formula for the Bessel polynomials is given below. Carlitz [21] then defined a class of polynomials associated with the Bessel polynomials; these are given by an explicit formula [22] and by a generating function. Using the notion of composita, we can obtain the explicit formula (9) from the generating function (10).
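A minimal sketch, assuming the Krall-Frink explicit formula y_n(x) = Σ_k (n+k)! / ((n−k)! k! 2^k) x^k from the classical literature (the displayed formula in the source may use a different normalization):

```python
from math import factorial

def bessel_poly_coeffs(n):
    """Coefficients [c_0, ..., c_n] of the Bessel polynomial
    y_n(x) = sum_k (n+k)! / ((n-k)! k! 2**k) x**k (Krall-Frink)."""
    return [factorial(n + k) // (factorial(n - k) * factorial(k) * 2**k)
            for k in range(n + 1)]

assert bessel_poly_coeffs(2) == [1, 3, 3]        # y_2 = 1 + 3x + 3x^2
assert bessel_poly_coeffs(3) == [1, 6, 15, 15]   # y_3 = 1 + 6x + 15x^2 + 15x^3
```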

Meixner Polynomials of the First Kind
The Meixner polynomials of the first kind are defined by the following recurrence relation [23, 24]. Using (16), we obtain the first few Meixner polynomials of the first kind. The Meixner polynomials of the first kind are also given by the following generating function [22]; using the notion of composita, we can obtain an explicit formula for M_n(x; β, c) from the generating function (19).
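A sketch assuming the standard hypergeometric representation M_n(x; β, c) = ₂F₁(−n, −x; β; 1 − 1/c) and the classical generating function Σ_n (β)_n/n! · M_n(x; β, c) t^n = (1 − t/c)^x (1 − t)^{−x−β} from the orthogonal-polynomials literature; the paper's normalization in (16) and (19) may differ.

```python
def meixner(n, x, beta, c):
    """M_n(x; beta, c) = 2F1(-n, -x; beta; 1 - 1/c), as a finite sum."""
    z = 1.0 - 1.0 / c
    total, term = 0.0, 1.0
    for k in range(n + 1):
        total += term
        # Ratio of consecutive hypergeometric terms:
        term *= (-n + k) * (-x + k) * z / ((beta + k) * (k + 1))
    return total

# Check against the classical generating function
# sum_n (beta)_n / n! * M_n(x) * t^n = (1 - t/c)^x * (1 - t)^(-x - beta):
beta, c, x, t = 2.0, 0.5, 3, 0.1
poch_over_fact, s = 1.0, 0.0
for n in range(60):                      # truncated series; terms ~ t^n
    s += poch_over_fact * meixner(n, x, beta, c) * t**n
    poch_over_fact *= (beta + n) / (n + 1)
rhs = (1 - t / c)**x * (1 - t)**(-x - beta)
assert abs(s - rhs) < 1e-9
```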

In 1923, Löwner [7] proved that the inverse of the Koebe function k(z) = z/(1 − z)² provides the best upper bounds for the coefficients of the inverses of analytic univalent functions. Although the estimates for the coefficients of the inverses of analytic univalent functions have been obtained in a surprisingly straightforward way (e.g., see [8, page 104]), the case becomes a challenge when the biunivalency condition is imposed on these functions. A function is said to be biunivalent in a given domain if both the function and its inverse are univalent there. By the same token, a function is said to be bistarlike in a given domain if both the function and its inverse are starlike there. Finding bounds for the coefficients of classes of biunivalent functions dates back to 1967 (see Lewin [9]). Interest in the bounds for the coefficients of subclasses of biunivalent functions was revived by the publications [10-14], where estimates for the first two coefficients of certain classes of biunivalent functions were provided. Not much is known about the higher coefficients of the subclasses of biunivalent functions, as Ali et al. [13] declared that finding bounds for |a_n|, n ≥ 4, is an open problem. In this paper, we use the Faber polynomial expansions of the functions f and h = f⁻¹ in the class Σ(·, ·) to obtain bounds for their general coefficients |a_n| and provide estimates for the early coefficients of these functions.
We will need the following two well-known lemmas, the first of which can be found in [15] (also see Duren [1]).

Lemma 1. Let
p = p(z) be such that Re(p(z)) > 0 for |z| < 1. If ≥ −1/2, then the stated inequality holds. Consequently, we have the following lemma, for which we provide a short proof for the sake of completeness.
In the following theorem, we observe the unpredictability of the early coefficients of the function f and its inverse map h = f^{-1} in Σ[ , ], and provide an estimate for the general coefficients of such functions.

Now, in light of (22), we conclude that Once again, solving for 2 0 and taking the square root of both sides, we obtain Now, the first part of Theorem 3 follows since for 2 − > 1 it is easy to see that Adding the equations in (23) and using the fact that 1 = − 1 , we obtain Dividing by 4 and taking the absolute values of both sides yields On the other hand, from the second equations in (22) and (23), we obtain Taking the absolute values of both sides and applying Lemma 2, it follows that This concludes the second part of Theorem 3 since for 0 < < 1 − 2 we have Substituting (22) into (23), we obtain Following a simple algebraic manipulation, we obtain the coefficient body Finally, for = 0, 0 ≤ ≤ − 1, (20) yields Solving for and taking the absolute values of both sides, we obtain
Authors such as Saitoh [1] and Owa [2, 3] had previously studied the properties of the class of functions (0) ( , , ; 1). In [4], the extreme points, coefficient bounds, and radius of univalency of the same class of functions were studied, and the following theorem was obtained among other results.
Recently, Hayami et al. [5] studied the coefficient estimates of the class of functions ( ) ∈ in the open unit disc. They derived results based on properties of the class of functions ( ) ∈ [ , ], ̸ = 0. Xu et al. [6] used the principle of differential subordination and the Dziok-Srivastava convolution operator to investigate some analytic properties of a certain subclass of analytic functions. We also note that Stanciu et al. [7] used the properties of the class of functions ( ) ∈ [ , ], ̸ = 0, to investigate the analytic and univalent properties of the following integral operator: where ( ∈ C, ∈ C \ {0}, ∈ , ∈ [1, ]). Motivated by the work in [1-7], we use the properties of the class of functions ( ) ∈ [ , ], ̸ = 0, to investigate the coefficient estimates of the class of functions ( ) ∈ ( ) in the open unit disc. We also use the principle of differential subordination to investigate some properties of the class of functions ( ) ( , , ; ).
We state the following known results required to prove our work.

Remark 7. Combining (i) and (ii) of Lemma 6 gives
Remark 8. For convenience, we restrict our results to the principal branch; unless otherwise stated, the constraints on the parameters remain the same throughout this paper.

Coefficient Bounds of the Class of Functions ( ) ( , , ; )
We begin with the following result.
By Theorem 3, (19) can be rewritten in a form which yields the expression (17).
Proof. It is similar to the proof of Theorem 9.

Corollary 16. Let
Proof. The result follows from Theorem 15.

Introduction
A meromorphic function is a single-valued function that is analytic in all but possibly a discrete subset of its domain, and near each of those singularities it must tend to infinity like a polynomial (i.e., the exceptional points must be poles rather than essential singularities). A simpler definition states that a meromorphic function f(z) is a function of the form f(z) = g(z)/h(z), where g(z) and h(z) are entire functions with h(z) not identically zero (see [1, page 64]). A meromorphic function therefore may have only finite-order, isolated poles and zeros and no essential singularities in its domain. An equivalent definition is that a meromorphic function is a complex analytic map to the Riemann sphere. For example, the gamma function is meromorphic in the whole complex plane C.
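The quotient definition can be illustrated concretely: tan z = sin z / cos z is a ratio of entire functions, hence meromorphic, with simple poles at the zeros of cos z. A quick numerical check using Python's cmath (the function name is ours):

```python
import cmath

def tan_as_quotient(z):
    """tan z written as the quotient of two entire functions; its only
    singularities are simple poles at the zeros of cos z."""
    return cmath.sin(z) / cmath.cos(z)

# Regular away from the poles:
regular = abs(tan_as_quotient(1.0 + 1.0j))
# Blowing up like 1/(z - pi/2) as the nearest pole is approached,
# as a pole, rather than an essential singularity, requires:
near_pole = abs(tan_as_quotient(cmath.pi / 2 - 1e-8))
```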
In the present paper, we initiate the study of functions which are meromorphic in the punctured disk * = { : 0 < | | < 1} with a Laurent expansion about the origin; see [2].
This paper is divided into two sections; the first introduces a new class of meromorphically analytic functions, which is defined by means of a Hadamard product (or convolution) involving a linear operator. The second section highlights some applications of the main results involving generalized hypergeometric functions.
As for the second result of this paper, on applications involving generalized hypergeometric functions, we need to utilize the well-known Gaussian hypergeometric function. We denote by F(a, b; c; z) the function given by the series ∑_{n≥0} (a)_n (b)_n / ((c)_n n!) z^n for c ≠ 0, −1, −2, . . . and z ∈ C \ {0}, where (λ)_n = λ(λ + 1) ⋯ (λ + n − 1) is the Pochhammer symbol. We note that F is the well-known Gaussian hypergeometric function.
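With the Pochhammer symbol in hand, the Gaussian hypergeometric series can be summed term by term for |z| < 1. A sketch (the names and the truncation length are ours):

```python
from math import log, isclose

def pochhammer(a, n):
    """Rising factorial (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1."""
    p = 1.0
    for k in range(n):
        p *= a + k
    return p

def gauss_2f1(a, b, c, z, terms=60):
    """Partial sum of the Gaussian hypergeometric series
        F(a, b; c; z) = sum_{n>=0} (a)_n (b)_n / ((c)_n n!) z^n,
    convergent for |z| < 1 (c not a non-positive integer)."""
    return sum(pochhammer(a, n) * pochhammer(b, n)
               / (pochhammer(c, n) * pochhammer(1.0, n)) * z ** n
               for n in range(terms))
```

A classical special case serves as a sanity check: F(1, 1; 2; z) = −log(1 − z)/z, so the partial sum at z = 1/2 should approach 2 log 2.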
The subordination relation (13), in conjunction with (17), takes the following form. Definition 2. A function f ∈ Σ of the form (6) is said to be in the class Σ( , , , , ) if it satisfies the subordination relation (19) above.

Characterization and Other Related Properties
In this section, we begin by proving a characterization property which provides a necessary and sufficient condition for a function ∈ Σ of the form (6) to belong to the class Σ ( , , ) of meromorphically analytic functions.

Theorem 3. A function f ∈ Σ is a member of the class Σ( , , ) if and only if it satisfies
The equality is attained for the function f(z) given below. Proof. Let f of the form (6) belong to the class Σ( , , ). Then, in view of (12), we find the required inequality. Putting |z| = r (0 ≤ r < 1) and noting that the denominator in the above inequality remains positive by virtue of the constraints stated in (13) for all r ∈ [0, 1), we easily arrive at the desired inequality (20) by letting r → 1.
Conversely, if we assume that the inequality (20) holds true in the simplified form (22), it can readily be shown that the resulting expression is equivalent to the condition of the theorem, so that f ∈ Σ( , , ); hence the theorem.
Theorem 3 immediately yields the following result.

Corollary 4.
If the function ∈ Σ belongs to the class Σ ( , , ), then where the equality holds true for the functions ( ) given by (21).
We now state the following growth and distortion properties for the class Σ ( , , ).
This completes the proof of Theorem 5.
We next determine the radii of meromorphic starlikeness and meromorphic convexity of the class Σ( , , ), which are given by Theorems 6 and 7 below. Theorem 6. If the function f defined by (6) is in the class Σ( , , ), then f is meromorphic starlike of the given order in the disk |z| < r_1, where r_1 is as stated below. The equality is attained for the function f(z) given by (21).
Proof. It suffices to prove that (32) holds. For |z| < r_1, we have the required bound, and hence (32) holds true for, equivalently, With the aid of (20) and (34), it follows that for each fixed This completes the proof of Theorem 6.
Theorem 7. If the function f defined by (6) is in the class Σ( , , ), then f is meromorphic convex of the given order in the disk |z| < r_2, where r_2 is as stated below. The equality is attained for the function f(z) given by (21).
Proof. By using the same technique as that employed in the proof of Theorem 6, we can show that the corresponding inequality holds for |z| < r_2; with the aid of Theorem 3, we then have the assertion of Theorem 7.

Applications Involving Generalized Hypergeometric Functions
Theorem 8. A function f ∈ Σ is a member of the class Σ( , , , , ) if and only if it satisfies the condition below. The equality is attained for the function f(z) given by (40). Proof. By using the same technique as in the proof of Theorem 3, along with Definition 2, we can prove Theorem 8.
The following consequences of Theorem 8 can be deduced by applying (39) and (40) along with Definition 2.

Corollary 10. If the function defined by
The equality is attained for the function ( ) given by (40).

Introduction
The Hurwitz-Lerch zeta function Φ(z, s, a), which is one of the fundamentally important higher transcendental functions, is defined by (see, e.g., [1, p. 121 et seq.]; see also [2] and [3, p. 194 et seq.]) Φ(z, s, a) = ∑_{n=0}^∞ z^n / (n + a)^s. The Hurwitz-Lerch zeta function contains, as its special cases, the Riemann zeta function ζ(s) and the Hurwitz zeta function ζ(s, a). Indeed, the Hurwitz-Lerch zeta function Φ(z, s, a) defined in (5) can be continued meromorphically to the whole complex s-plane, except for a simple pole at s = 1 with residue 1. Motivated by the works of Goyal and Laddha [4], Lin and Srivastava [5], Garg et al. [6], and other authors, Srivastava et al. [7] (see also [8]) investigated various properties of a natural multiparameter extension and generalization of the Hurwitz-Lerch zeta function Φ(z, s, a) defined by (5) (see also [9]); in particular, they considered the function defined by (6). Here, and for the remainder of this paper, (λ)_ν denotes the Pochhammer symbol defined, in terms of the Gamma function, by (λ)_ν = Γ(λ + ν)/Γ(λ), it being understood conventionally that (0)_0 := 1 and assumed tacitly that the Γ-quotient exists (see, for details, [10, p. 21 et seq.]). In terms of the extended Hurwitz-Lerch zeta function defined by (6), the following generalization of several known integral representations arising from (5) was given by Srivastava et al. [7], provided that the integral exists.
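The defining series of Φ(z, s, a) is straightforward to truncate numerically. A sketch (the names and the truncation length are ours), with the Riemann zeta function recovered as the special case Φ(1, s, 1):

```python
from math import pi

def hurwitz_lerch_phi(z, s, a, terms=10000):
    """Partial sum of the Hurwitz-Lerch zeta function
        Phi(z, s, a) = sum_{n>=0} z^n / (n + a)^s,
    valid for |z| < 1, or for |z| = 1 with Re(s) > 1
    (a not a non-positive integer)."""
    return sum(z ** n / (n + a) ** s for n in range(terms))

# Phi(1, s, 1) is the Riemann zeta function zeta(s) and Phi(1, s, a)
# is the Hurwitz zeta function zeta(s, a); the pole at s = 1 shows up
# as divergence of the partial sums.
```

At z = 1, s = 2, a = 1 the tail of the series beyond 10000 terms is of order 1e-4, so the partial sum matches ζ(2) = π²/6 to roughly that accuracy.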
Definition 1. The function Ψ* or Ψ ( , ∈ N_0) involved in the right-hand side of (9) is the well-known Fox-Wright function, which is a generalization of the familiar generalized hypergeometric function ( , ∈ N_0), with numerator parameters a_1, . . . , a_p and denominator parameters b_1, . . . , b_q, where the equality in the convergence condition holds true for suitably bounded values of |z|. Recently, Srivastava [12] introduced and investigated a significantly more general class of Hurwitz-Lerch zeta type functions, so that, clearly, we have the relationship (14). In the special case (16), we have assumed further that Re(s) > 0 when |z| ≦ 1 (z ≠ 1), or Re(s − μ) > 0 when z = 1, provided that the integral (16) exists. The function Θ( , , ; ) was studied by Raina and Chhajed [13, Eq. (1.6)] and, more recently, by Srivastava et al. [14]. As a particularly interesting case of the function Θ( , , ; ), we recall the function Φ*_μ(z, s, a) introduced by Goyal and Laddha [4], namely Φ*_μ(z, s, a) = ∑_{n=0}^∞ ((μ)_n / n!) z^n / (n + a)^s. Another special case of the function Θ( , , ; ) worth mentioning occurs when the parameters equal 1; the resulting function is the extended Hurwitz zeta function introduced by Chaudhry and Zubair [15].
In his work, Srivastava [12, p. 1489, Eq. (2.1)] also derived the series representation (22) of the function Φ, which holds provided that both sides of (22) exist.
provided that each member of (28) exists, with the remaining quantity given by (9). Motivated by a number of recent works by the present authors [18-20], and also by those of several other authors [4-9, 21, 22], this paper aims to provide many new relationships involving the new family of the generalized Hurwitz-Lerch zeta function Φ.

Pochhammer Contour Integral Representation for Fractional Derivative
The most familiar representation for the fractional derivative of order μ of a function f(z) is the Riemann-Liouville integral [23] (see also [24-26]), namely D_z^μ f(z) = (1/Γ(−μ)) ∫_0^z f(ξ)(z − ξ)^{−μ−1} dξ (Re(μ) < 0), where the integration is carried out along a straight line from 0 to z in the complex ξ-plane. By integrating by parts m times, we obtain a representation that allows us to relax the restriction Re(μ) < 0 to Re(μ) < m (see [26]). Another representation for the fractional derivative is based on the Cauchy integral formula. This representation, too, has been widely used in many interesting papers (see, e.g., the works of Osler [27-30]).
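Applied to a power function, the Riemann-Liouville definition yields the standard closed form D_z^μ z^p = Γ(p + 1)/Γ(p − μ + 1) z^{p−μ} for p > −1, which is how fractional-derivative formulas of this kind are typically checked on monomials. A sketch (the function name is ours):

```python
from math import gamma, isclose

def rl_frac_deriv_power(p, mu, z):
    """Riemann-Liouville fractional derivative of the power z**p
    (p > -1), via the standard closed form
        D_z^mu z^p = Gamma(p+1) / Gamma(p-mu+1) * z**(p-mu)."""
    return gamma(p + 1) / gamma(p - mu + 1) * z ** (p - mu)

# mu = 1 recovers the ordinary derivative of z^3, namely 3 z^2, and two
# successive half-derivatives agree with one whole derivative on powers.
```

The semigroup behavior on powers is visible in the Gamma quotients: D^{1/2} applied to z^3 gives (Γ(4)/Γ(3.5)) z^{5/2}, and a second D^{1/2} multiplies by Γ(3.5)/Γ(3), so the Γ(3.5) factors cancel and the composite equals D^1 z^3 = 3z².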
The representation of the fractional derivative that is least restrictive on the parameters appears to be the one based on the Pochhammer contour integral introduced by Tremblay [31, 32]. Remark 4. In Definition 3, the function f(z) must be analytic at z = g^{-1}(0). However, it is interesting to note here that if we could also allow f(z) to have an essential singularity at z = g^{-1}(0), then (32) would still be valid. The integral (32) has apparent singularities at certain parameter values, which can be identified directly from its coefficient; however, by integrating the integral in (32) by parts m times in two different ways, we can show that the parameter values −1, −2, . . . and 0, 1, 2, . . . are removable singularities (see, for details, [31]). Adopting the Pochhammer-based representation for the fractional derivative relaxes the restriction to the requirement that the relevant parameter not be a negative integer. Now, by using (33) in conjunction with the series representation (22) for Φ, we obtain the following important fractional derivative formula that will play an important role in our present investigation:

Important Results Involving Fractional Calculus
In this section, we recall six fundamental theorems related to fractional calculus that will play central roles in our work. Each of these theorems is a fundamental formula related to the generalized chain rule for fractional derivatives or to Taylor-like expansions. Theorem 6. Let f(g^{-1}(z)) and f(h^{-1}(z)) be defined and analytic in the simply connected region R of the complex z-plane, and let the origin be an interior or boundary point of R. Suppose also that g^{-1}(z) and h^{-1}(z) are regular univalent functions on R and that h^{-1}(0) = g^{-1}(0). Let ∮ f(g^{-1}(z)) dz vanish over a simple closed contour in R ∪ {0} through the origin. Then the following relation holds true: Relation (35) allows us to obtain, very easily, known and new summation formulas involving the special functions of mathematical physics.
By applying relation (35), Gaboury and Tremblay [34] proved the following corollary, which will be useful in the next section.

Corollary 7.
Under the hypotheses of Theorem 6, let m be a positive integer. Then the following relation holds true. Next, in 1971, Osler [35] obtained the following generalized Taylor-like series expansion involving fractional derivatives.

Then, for arbitrary complex parameters, and for z on C_1(1) defined below, the expansion holds. The case of Theorem 9 with the parameter in (0, 1] reduces to the following form. Tremblay and Fugère [41] developed the power series of an analytic function f(z) in terms of the function (z − z_1)(z − z_2), where z_1 and z_2 are two arbitrary points inside the region of analyticity R of f(z). Explicitly, they gave the following theorem.
Proof. The result follows by taking f(z) = Φ for the parameter > 0 and for z on C_1(1) defined as above, provided that both sides of (73) exist.